00:00:00.001 Started by upstream project "autotest-per-patch" build number 131950 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.083 The recommended git tool is: git 00:00:00.083 using credential 00000000-0000-0000-0000-000000000002 00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.123 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.165 Using shallow fetch with depth 1 00:00:00.165 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.165 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.842 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.854 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.867 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:05.867 > git config core.sparsecheckout # timeout=10 00:00:05.878 > git read-tree -mu HEAD # timeout=10 00:00:05.893 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:05.911 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:05.911 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.012 [Pipeline] Start of Pipeline 00:00:06.025 [Pipeline] library 00:00:06.027 Loading library shm_lib@master 00:00:06.027 Library shm_lib@master is cached. Copying from home. 00:00:06.045 [Pipeline] node 00:00:06.053 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.056 [Pipeline] { 00:00:06.067 [Pipeline] catchError 00:00:06.069 [Pipeline] { 00:00:06.081 [Pipeline] wrap 00:00:06.087 [Pipeline] { 00:00:06.093 [Pipeline] stage 00:00:06.095 [Pipeline] { (Prologue) 00:00:06.284 [Pipeline] sh 00:00:06.568 + logger -p user.info -t JENKINS-CI 00:00:06.585 [Pipeline] echo 00:00:06.587 Node: GP11 00:00:06.596 [Pipeline] sh 00:00:06.898 [Pipeline] setCustomBuildProperty 00:00:06.911 [Pipeline] echo 00:00:06.913 Cleanup processes 00:00:06.918 [Pipeline] sh 00:00:07.203 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.203 410470 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.214 [Pipeline] sh 00:00:07.499 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.499 ++ awk '{print $1}' 00:00:07.499 ++ grep -v 'sudo pgrep' 00:00:07.499 + sudo kill -9 00:00:07.499 + true 00:00:07.513 [Pipeline] cleanWs 00:00:07.523 [WS-CLEANUP] Deleting project workspace... 00:00:07.524 [WS-CLEANUP] Deferred wipeout is used... 
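For reference, the "Cleanup processes" step above reduces to the following shell sketch; the standalone form and the WORKSPACE variable are reconstructions from this trace, not the pipeline's actual script:

    # Kill anything still running out of the previous build's SPDK tree.
    # pgrep -af prints "PID full-command"; grep -v drops the pgrep invocation
    # itself; awk keeps only the PID column. An empty PID list makes kill
    # fail, which the trailing "|| true" swallows (hence the "+ true" above).
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true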
00:00:07.529 [WS-CLEANUP] done 00:00:07.533 [Pipeline] setCustomBuildProperty 00:00:07.547 [Pipeline] sh 00:00:07.830 + sudo git config --global --replace-all safe.directory '*' 00:00:07.931 [Pipeline] httpRequest 00:00:08.319 [Pipeline] echo 00:00:08.321 Sorcerer 10.211.164.101 is alive 00:00:08.331 [Pipeline] retry 00:00:08.333 [Pipeline] { 00:00:08.350 [Pipeline] httpRequest 00:00:08.355 HttpMethod: GET 00:00:08.355 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.356 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.378 Response Code: HTTP/1.1 200 OK 00:00:08.379 Success: Status code 200 is in the accepted range: 200,404 00:00:08.379 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:16.906 [Pipeline] } 00:00:16.923 [Pipeline] // retry 00:00:16.931 [Pipeline] sh 00:00:17.218 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:17.237 [Pipeline] httpRequest 00:00:17.657 [Pipeline] echo 00:00:17.659 Sorcerer 10.211.164.101 is alive 00:00:17.668 [Pipeline] retry 00:00:17.670 [Pipeline] { 00:00:17.685 [Pipeline] httpRequest 00:00:17.689 HttpMethod: GET 00:00:17.690 URL: http://10.211.164.101/packages/spdk_0a41b9e4e130f4bd16efe5b9bc7b310242002f11.tar.gz 00:00:17.691 Sending request to url: http://10.211.164.101/packages/spdk_0a41b9e4e130f4bd16efe5b9bc7b310242002f11.tar.gz 00:00:17.713 Response Code: HTTP/1.1 200 OK 00:00:17.714 Success: Status code 200 is in the accepted range: 200,404 00:00:17.714 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0a41b9e4e130f4bd16efe5b9bc7b310242002f11.tar.gz 00:01:58.835 [Pipeline] } 00:01:58.853 [Pipeline] // retry 00:01:58.860 [Pipeline] sh 00:01:59.156 + tar --no-same-owner -xf spdk_0a41b9e4e130f4bd16efe5b9bc7b310242002f11.tar.gz 00:02:01.711 [Pipeline] sh 00:02:01.999 + git -C spdk log --oneline -n5 00:02:01.999 0a41b9e4e nvmf: rename passthrough_nsid -> passthru_nsid 00:02:01.999 a77e23853 nvmf: use bdev's nsid for admin command passthru 00:02:01.999 568b24fde nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:02:01.999 d631ca103 bdev: add spdk_bdev_get_nvme_nsid() 00:02:01.999 12fc2abf1 test: Remove autopackage.sh 00:02:02.011 [Pipeline] } 00:02:02.027 [Pipeline] // stage 00:02:02.037 [Pipeline] stage 00:02:02.039 [Pipeline] { (Prepare) 00:02:02.056 [Pipeline] writeFile 00:02:02.072 [Pipeline] sh 00:02:02.359 + logger -p user.info -t JENKINS-CI 00:02:02.372 [Pipeline] sh 00:02:02.658 + logger -p user.info -t JENKINS-CI 00:02:02.672 [Pipeline] sh 00:02:02.958 + cat autorun-spdk.conf 00:02:02.958 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.958 SPDK_TEST_NVMF=1 00:02:02.958 SPDK_TEST_NVME_CLI=1 00:02:02.958 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.958 SPDK_TEST_NVMF_NICS=e810 00:02:02.958 SPDK_TEST_VFIOUSER=1 00:02:02.958 SPDK_RUN_UBSAN=1 00:02:02.958 NET_TYPE=phy 00:02:02.966 RUN_NIGHTLY=0 00:02:02.972 [Pipeline] readFile 00:02:02.997 [Pipeline] withEnv 00:02:03.000 [Pipeline] { 00:02:03.012 [Pipeline] sh 00:02:03.301 + set -ex 00:02:03.301 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:03.301 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:03.301 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.301 ++ SPDK_TEST_NVMF=1 00:02:03.301 ++ SPDK_TEST_NVME_CLI=1 00:02:03.301 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.301 ++ SPDK_TEST_NVMF_NICS=e810 00:02:03.301 ++ 
SPDK_TEST_VFIOUSER=1 00:02:03.301 ++ SPDK_RUN_UBSAN=1 00:02:03.301 ++ NET_TYPE=phy 00:02:03.301 ++ RUN_NIGHTLY=0 00:02:03.301 + case $SPDK_TEST_NVMF_NICS in 00:02:03.301 + DRIVERS=ice 00:02:03.301 + [[ tcp == \r\d\m\a ]] 00:02:03.301 + [[ -n ice ]] 00:02:03.301 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:03.301 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:03.301 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:03.301 rmmod: ERROR: Module irdma is not currently loaded 00:02:03.301 rmmod: ERROR: Module i40iw is not currently loaded 00:02:03.301 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:03.301 + true 00:02:03.301 + for D in $DRIVERS 00:02:03.301 + sudo modprobe ice 00:02:03.301 + exit 0 00:02:03.311 [Pipeline] } 00:02:03.326 [Pipeline] // withEnv 00:02:03.332 [Pipeline] } 00:02:03.346 [Pipeline] // stage 00:02:03.356 [Pipeline] catchError 00:02:03.358 [Pipeline] { 00:02:03.374 [Pipeline] timeout 00:02:03.374 Timeout set to expire in 1 hr 0 min 00:02:03.376 [Pipeline] { 00:02:03.392 [Pipeline] stage 00:02:03.394 [Pipeline] { (Tests) 00:02:03.408 [Pipeline] sh 00:02:03.695 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.695 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.695 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.695 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:03.695 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:03.695 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:03.695 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:03.695 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:03.695 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:03.695 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:03.695 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:03.695 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:03.695 + source /etc/os-release 00:02:03.695 ++ NAME='Fedora Linux' 00:02:03.695 ++ VERSION='39 (Cloud Edition)' 00:02:03.695 ++ ID=fedora 00:02:03.695 ++ VERSION_ID=39 00:02:03.695 ++ VERSION_CODENAME= 00:02:03.695 ++ PLATFORM_ID=platform:f39 00:02:03.695 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.695 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.695 ++ LOGO=fedora-logo-icon 00:02:03.695 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.695 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.695 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.695 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.695 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.695 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.695 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.695 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.695 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.695 ++ SUPPORT_END=2024-11-12 00:02:03.695 ++ VARIANT='Cloud Edition' 00:02:03.695 ++ VARIANT_ID=cloud 00:02:03.695 + uname -a 00:02:03.695 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:03.695 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:04.634 Hugepages 00:02:04.634 node hugesize free / total 00:02:04.634 node0 1048576kB 0 / 0 00:02:04.634 node0 2048kB 0 / 0 00:02:04.634 node1 1048576kB 0 / 0 00:02:04.634 node1 2048kB 0 / 0 00:02:04.634 00:02:04.634 Type BDF Vendor Device NUMA 
Driver Device Block devices 00:02:04.634 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:04.634 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:04.634 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:04.893 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:04.893 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:04.893 + rm -f /tmp/spdk-ld-path 00:02:04.893 + source autorun-spdk.conf 00:02:04.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.893 ++ SPDK_TEST_NVMF=1 00:02:04.893 ++ SPDK_TEST_NVME_CLI=1 00:02:04.893 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.893 ++ SPDK_TEST_NVMF_NICS=e810 00:02:04.893 ++ SPDK_TEST_VFIOUSER=1 00:02:04.893 ++ SPDK_RUN_UBSAN=1 00:02:04.893 ++ NET_TYPE=phy 00:02:04.893 ++ RUN_NIGHTLY=0 00:02:04.893 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:04.893 + [[ -n '' ]] 00:02:04.893 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.893 + for M in /var/spdk/build-*-manifest.txt 00:02:04.893 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:04.893 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:04.893 + for M in /var/spdk/build-*-manifest.txt 00:02:04.893 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:04.893 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:04.893 + for M in /var/spdk/build-*-manifest.txt 00:02:04.893 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:04.893 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:04.893 ++ uname 00:02:04.893 + [[ Linux == \L\i\n\u\x ]] 00:02:04.893 + sudo dmesg -T 00:02:04.893 + sudo dmesg --clear 00:02:04.893 + dmesg_pid=411767 00:02:04.893 + [[ Fedora Linux == FreeBSD ]] 00:02:04.893 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.893 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.893 + sudo dmesg -Tw 00:02:04.893 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:04.893 + [[ -x /usr/src/fio-static/fio ]] 00:02:04.893 + export FIO_BIN=/usr/src/fio-static/fio 00:02:04.893 + FIO_BIN=/usr/src/fio-static/fio 00:02:04.893 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:04.893 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:04.893 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:04.893 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.893 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.893 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:04.893 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.893 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.893 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:04.893 12:13:37 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:04.893 12:13:37 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:04.893 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.893 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:04.893 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:04.893 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:04.893 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:04.894 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:04.894 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:04.894 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:04.894 12:13:37 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:04.894 12:13:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:04.894 12:13:37 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:04.894 12:13:37 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:04.894 12:13:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:04.894 12:13:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:04.894 12:13:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:04.894 12:13:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.894 12:13:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.894 12:13:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.894 12:13:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.894 12:13:37 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.894 12:13:37 -- paths/export.sh@5 -- $ export PATH 00:02:04.894 12:13:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.894 12:13:37 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:04.894 12:13:37 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:04.894 12:13:37 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730286817.XXXXXX 00:02:04.894 12:13:37 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730286817.n5cQ0V 00:02:04.894 12:13:37 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:04.894 12:13:37 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:04.894 12:13:37 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:04.894 12:13:37 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:04.894 12:13:37 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:04.894 12:13:37 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:04.894 12:13:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:04.894 12:13:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.894 12:13:37 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:04.894 12:13:37 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:04.894 12:13:37 -- pm/common@17 -- $ local monitor 00:02:04.894 12:13:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.894 12:13:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.152 12:13:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.152 12:13:37 -- pm/common@21 -- $ date +%s 00:02:05.152 12:13:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.152 12:13:37 -- pm/common@21 -- $ date +%s 00:02:05.152 12:13:37 -- pm/common@25 -- $ sleep 1 00:02:05.152 12:13:37 -- pm/common@21 -- $ date +%s 00:02:05.152 12:13:37 -- pm/common@21 -- $ date +%s 00:02:05.152 12:13:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730286817 00:02:05.152 12:13:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730286817 00:02:05.152 12:13:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730286817 00:02:05.153 12:13:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730286817 00:02:05.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730286817_collect-vmstat.pm.log 00:02:05.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730286817_collect-cpu-load.pm.log 00:02:05.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730286817_collect-cpu-temp.pm.log 00:02:05.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730286817_collect-bmc-pm.bmc.pm.log 00:02:06.090 12:13:38 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:06.090 12:13:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.090 12:13:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.090 12:13:38 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.090 12:13:38 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.090 Wed Oct 30 11:13:38 AM UTC 2024 00:02:06.090 12:13:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.090 v25.01-pre-127-g0a41b9e4e 00:02:06.090 12:13:38 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:06.090 12:13:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.090 12:13:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.090 12:13:38 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:06.090 12:13:38 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:06.090 12:13:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.090 ************************************ 00:02:06.090 START TEST ubsan 00:02:06.090 ************************************ 00:02:06.090 12:13:38 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:06.090 using ubsan 00:02:06.090 00:02:06.090 real 0m0.000s 00:02:06.090 user 0m0.000s 00:02:06.090 sys 0m0.000s 00:02:06.090 12:13:38 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:06.090 12:13:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.090 ************************************ 00:02:06.090 END TEST ubsan 00:02:06.090 ************************************ 00:02:06.090 12:13:38 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:06.090 12:13:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.090 12:13:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.090 12:13:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:06.090 12:13:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:06.090 12:13:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:06.090 12:13:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:06.090 12:13:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:06.090 
12:13:38 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:06.090 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:06.090 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:06.350 Using 'verbs' RDMA provider 00:02:17.297 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:27.289 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:27.289 Creating mk/config.mk...done. 00:02:27.289 Creating mk/cc.flags.mk...done. 00:02:27.289 Type 'make' to build. 00:02:27.289 12:13:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:27.289 12:13:59 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:27.289 12:13:59 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:27.289 12:13:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.289 ************************************ 00:02:27.289 START TEST make 00:02:27.289 ************************************ 00:02:27.289 12:13:59 make -- common/autotest_common.sh@1127 -- $ make -j48 00:02:27.289 make[1]: Nothing to be done for 'all'. 00:02:29.212 The Meson build system 00:02:29.212 Version: 1.5.0 00:02:29.212 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:29.212 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:29.212 Build type: native build 00:02:29.212 Project name: libvfio-user 00:02:29.212 Project version: 0.0.1 00:02:29.212 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:29.212 C linker for the host machine: cc ld.bfd 2.40-14 00:02:29.212 Host machine cpu family: x86_64 00:02:29.212 Host machine cpu: x86_64 00:02:29.212 Run-time dependency threads found: YES 00:02:29.212 Library dl found: YES 00:02:29.212 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:29.212 Run-time dependency json-c found: YES 0.17 00:02:29.212 Run-time dependency cmocka found: YES 1.1.7 00:02:29.212 Program pytest-3 found: NO 00:02:29.212 Program flake8 found: NO 00:02:29.212 Program misspell-fixer found: NO 00:02:29.212 Program restructuredtext-lint found: NO 00:02:29.212 Program valgrind found: YES (/usr/bin/valgrind) 00:02:29.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.212 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.212 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:29.212 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:29.213 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:29.213 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:29.213 Build targets in project: 8 00:02:29.213 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:29.213 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:29.213 00:02:29.213 libvfio-user 0.0.1 00:02:29.213 00:02:29.213 User defined options 00:02:29.213 buildtype : debug 00:02:29.213 default_library: shared 00:02:29.213 libdir : /usr/local/lib 00:02:29.213 00:02:29.213 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.796 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:30.059 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:30.059 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:30.059 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:30.059 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:30.059 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:30.059 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:30.059 [7/37] Compiling C object samples/null.p/null.c.o 00:02:30.059 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:30.059 [9/37] Compiling C object samples/server.p/server.c.o 00:02:30.059 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:30.059 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:30.059 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:30.059 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:30.059 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:30.059 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:30.059 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:30.059 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:30.059 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:30.059 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:30.059 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:30.059 [21/37] Compiling C object samples/client.p/client.c.o 00:02:30.059 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:30.322 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:30.322 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:30.322 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:30.322 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:30.322 [27/37] Linking target samples/client 00:02:30.322 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:30.322 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:30.322 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:30.322 [31/37] Linking target test/unit_tests 00:02:30.587 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:30.587 [33/37] Linking target samples/server 00:02:30.587 [34/37] Linking target samples/gpio-pci-idio-16 00:02:30.587 [35/37] Linking target samples/lspci 00:02:30.587 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:30.587 [37/37] Linking target samples/null 00:02:30.587 INFO: autodetecting backend as ninja 00:02:30.587 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
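For reference, the libvfio-user build above and the install step that follows are plain meson/ninja. A minimal standalone sketch, with directories taken from this log and options from the "User defined options" block above (the real driver is SPDK's build scripting, not these hand-typed commands):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BUILD=$SPDK/build/libvfio-user/build-debug
    # Configure as shown above: debug buildtype, shared default library,
    # libdir /usr/local/lib.
    meson setup "$BUILD" "$SPDK/libvfio-user" --buildtype debug \
          --default-library shared --libdir /usr/local/lib
    # The [1/37]..[37/37] compile/link steps above:
    ninja -C "$BUILD"
    # Stage the result under the build tree, as the next log line does:
    DESTDIR=$SPDK/build/libvfio-user meson install --quiet -C "$BUILD"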
00:02:30.849 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.791 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:31.791 ninja: no work to do. 00:02:35.979 The Meson build system 00:02:35.979 Version: 1.5.0 00:02:35.979 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:35.979 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:35.979 Build type: native build 00:02:35.979 Program cat found: YES (/usr/bin/cat) 00:02:35.979 Project name: DPDK 00:02:35.979 Project version: 24.03.0 00:02:35.979 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:35.979 C linker for the host machine: cc ld.bfd 2.40-14 00:02:35.979 Host machine cpu family: x86_64 00:02:35.979 Host machine cpu: x86_64 00:02:35.979 Message: ## Building in Developer Mode ## 00:02:35.979 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:35.979 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:35.979 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:35.979 Program python3 found: YES (/usr/bin/python3) 00:02:35.979 Program cat found: YES (/usr/bin/cat) 00:02:35.979 Compiler for C supports arguments -march=native: YES 00:02:35.979 Checking for size of "void *" : 8 00:02:35.979 Checking for size of "void *" : 8 (cached) 00:02:35.979 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:35.979 Library m found: YES 00:02:35.979 Library numa found: YES 00:02:35.979 Has header "numaif.h" : YES 00:02:35.979 Library fdt found: NO 00:02:35.979 Library execinfo found: NO 00:02:35.979 Has header "execinfo.h" : YES 00:02:35.979 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:35.979 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:35.979 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:35.979 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:35.979 Run-time dependency openssl found: YES 3.1.1 00:02:35.979 Run-time dependency libpcap found: YES 1.10.4 00:02:35.979 Has header "pcap.h" with dependency libpcap: YES 00:02:35.979 Compiler for C supports arguments -Wcast-qual: YES 00:02:35.979 Compiler for C supports arguments -Wdeprecated: YES 00:02:35.979 Compiler for C supports arguments -Wformat: YES 00:02:35.979 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:35.979 Compiler for C supports arguments -Wformat-security: NO 00:02:35.979 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.979 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:35.979 Compiler for C supports arguments -Wnested-externs: YES 00:02:35.979 Compiler for C supports arguments -Wold-style-definition: YES 00:02:35.979 Compiler for C supports arguments -Wpointer-arith: YES 00:02:35.979 Compiler for C supports arguments -Wsign-compare: YES 00:02:35.979 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:35.979 Compiler for C supports arguments -Wundef: YES 00:02:35.979 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.979 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:35.979 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:35.979 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.979 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:35.979 Program objdump found: YES (/usr/bin/objdump) 00:02:35.979 Compiler for C supports arguments -mavx512f: YES 00:02:35.979 Checking if "AVX512 checking" compiles: YES 00:02:35.979 Fetching value of define "__SSE4_2__" : 1 00:02:35.979 Fetching value of define "__AES__" : 1 00:02:35.979 Fetching value of define "__AVX__" : 1 00:02:35.979 Fetching value of define "__AVX2__" : (undefined) 00:02:35.979 Fetching value of define "__AVX512BW__" : (undefined) 00:02:35.979 Fetching value of define "__AVX512CD__" : (undefined) 00:02:35.979 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:35.979 Fetching value of define "__AVX512F__" : (undefined) 00:02:35.979 Fetching value of define "__AVX512VL__" : (undefined) 00:02:35.979 Fetching value of define "__PCLMUL__" : 1 00:02:35.979 Fetching value of define "__RDRND__" : 1 00:02:35.979 Fetching value of define "__RDSEED__" : (undefined) 00:02:35.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:35.979 Fetching value of define "__znver1__" : (undefined) 00:02:35.979 Fetching value of define "__znver2__" : (undefined) 00:02:35.979 Fetching value of define "__znver3__" : (undefined) 00:02:35.979 Fetching value of define "__znver4__" : (undefined) 00:02:35.979 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:35.979 Message: lib/log: Defining dependency "log" 00:02:35.979 Message: lib/kvargs: Defining dependency "kvargs" 00:02:35.979 Message: lib/telemetry: Defining dependency "telemetry" 00:02:35.979 Checking for function "getentropy" : NO 00:02:35.979 Message: lib/eal: Defining dependency "eal" 00:02:35.979 Message: lib/ring: Defining dependency "ring" 00:02:35.979 Message: lib/rcu: Defining dependency "rcu" 00:02:35.979 Message: lib/mempool: Defining dependency "mempool" 00:02:35.979 Message: lib/mbuf: Defining dependency "mbuf" 00:02:35.979 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:35.979 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.979 Compiler for C supports arguments -mpclmul: YES 00:02:35.979 Compiler for C supports arguments -maes: YES 00:02:35.979 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.979 Compiler for C supports arguments -mavx512bw: YES 00:02:35.979 Compiler for C supports arguments -mavx512dq: YES 00:02:35.979 Compiler for C supports arguments -mavx512vl: YES 00:02:35.979 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:35.979 Compiler for C supports arguments -mavx2: YES 00:02:35.979 Compiler for C supports arguments -mavx: YES 00:02:35.979 Message: lib/net: Defining dependency "net" 00:02:35.979 Message: lib/meter: Defining dependency "meter" 00:02:35.979 Message: lib/ethdev: Defining dependency "ethdev" 00:02:35.979 Message: lib/pci: Defining dependency "pci" 00:02:35.979 Message: lib/cmdline: Defining dependency "cmdline" 00:02:35.979 Message: lib/hash: Defining dependency "hash" 00:02:35.979 Message: lib/timer: Defining dependency "timer" 00:02:35.979 Message: lib/compressdev: Defining dependency "compressdev" 00:02:35.979 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:35.979 Message: lib/dmadev: Defining dependency "dmadev" 00:02:35.979 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:35.979 Message: lib/power: Defining dependency "power" 00:02:35.979 Message: lib/reorder: Defining dependency 
"reorder" 00:02:35.979 Message: lib/security: Defining dependency "security" 00:02:35.979 Has header "linux/userfaultfd.h" : YES 00:02:35.979 Has header "linux/vduse.h" : YES 00:02:35.979 Message: lib/vhost: Defining dependency "vhost" 00:02:35.979 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.979 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.979 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.979 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.979 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:35.979 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:35.979 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:35.979 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:35.979 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:35.979 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:35.979 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:35.979 Configuring doxy-api-html.conf using configuration 00:02:35.979 Configuring doxy-api-man.conf using configuration 00:02:35.979 Program mandb found: YES (/usr/bin/mandb) 00:02:35.979 Program sphinx-build found: NO 00:02:35.979 Configuring rte_build_config.h using configuration 00:02:35.979 Message: 00:02:35.979 ================= 00:02:35.979 Applications Enabled 00:02:35.979 ================= 00:02:35.979 00:02:35.979 apps: 00:02:35.979 00:02:35.979 00:02:35.979 Message: 00:02:35.979 ================= 00:02:35.979 Libraries Enabled 00:02:35.979 ================= 00:02:35.979 00:02:35.979 libs: 00:02:35.979 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.979 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:35.979 cryptodev, dmadev, power, reorder, security, vhost, 00:02:35.979 00:02:35.979 Message: 00:02:35.979 =============== 00:02:35.979 Drivers Enabled 00:02:35.979 =============== 00:02:35.979 00:02:35.979 common: 00:02:35.979 00:02:35.979 bus: 00:02:35.979 pci, vdev, 00:02:35.979 mempool: 00:02:35.979 ring, 00:02:35.979 dma: 00:02:35.979 00:02:35.979 net: 00:02:35.979 00:02:35.979 crypto: 00:02:35.979 00:02:35.979 compress: 00:02:35.979 00:02:35.979 vdpa: 00:02:35.979 00:02:35.979 00:02:35.979 Message: 00:02:35.979 ================= 00:02:35.979 Content Skipped 00:02:35.979 ================= 00:02:35.979 00:02:35.979 apps: 00:02:35.979 dumpcap: explicitly disabled via build config 00:02:35.979 graph: explicitly disabled via build config 00:02:35.979 pdump: explicitly disabled via build config 00:02:35.979 proc-info: explicitly disabled via build config 00:02:35.979 test-acl: explicitly disabled via build config 00:02:35.979 test-bbdev: explicitly disabled via build config 00:02:35.979 test-cmdline: explicitly disabled via build config 00:02:35.979 test-compress-perf: explicitly disabled via build config 00:02:35.979 test-crypto-perf: explicitly disabled via build config 00:02:35.979 test-dma-perf: explicitly disabled via build config 00:02:35.979 test-eventdev: explicitly disabled via build config 00:02:35.979 test-fib: explicitly disabled via build config 00:02:35.979 test-flow-perf: explicitly disabled via build config 00:02:35.979 test-gpudev: explicitly disabled via build config 00:02:35.979 test-mldev: explicitly disabled via build config 00:02:35.979 test-pipeline: explicitly disabled via build config 00:02:35.979 test-pmd: explicitly 
disabled via build config 00:02:35.979 test-regex: explicitly disabled via build config 00:02:35.979 test-sad: explicitly disabled via build config 00:02:35.979 test-security-perf: explicitly disabled via build config 00:02:35.979 00:02:35.979 libs: 00:02:35.979 argparse: explicitly disabled via build config 00:02:35.979 metrics: explicitly disabled via build config 00:02:35.979 acl: explicitly disabled via build config 00:02:35.979 bbdev: explicitly disabled via build config 00:02:35.979 bitratestats: explicitly disabled via build config 00:02:35.979 bpf: explicitly disabled via build config 00:02:35.979 cfgfile: explicitly disabled via build config 00:02:35.980 distributor: explicitly disabled via build config 00:02:35.980 efd: explicitly disabled via build config 00:02:35.980 eventdev: explicitly disabled via build config 00:02:35.980 dispatcher: explicitly disabled via build config 00:02:35.980 gpudev: explicitly disabled via build config 00:02:35.980 gro: explicitly disabled via build config 00:02:35.980 gso: explicitly disabled via build config 00:02:35.980 ip_frag: explicitly disabled via build config 00:02:35.980 jobstats: explicitly disabled via build config 00:02:35.980 latencystats: explicitly disabled via build config 00:02:35.980 lpm: explicitly disabled via build config 00:02:35.980 member: explicitly disabled via build config 00:02:35.980 pcapng: explicitly disabled via build config 00:02:35.980 rawdev: explicitly disabled via build config 00:02:35.980 regexdev: explicitly disabled via build config 00:02:35.980 mldev: explicitly disabled via build config 00:02:35.980 rib: explicitly disabled via build config 00:02:35.980 sched: explicitly disabled via build config 00:02:35.980 stack: explicitly disabled via build config 00:02:35.980 ipsec: explicitly disabled via build config 00:02:35.980 pdcp: explicitly disabled via build config 00:02:35.980 fib: explicitly disabled via build config 00:02:35.980 port: explicitly disabled via build config 00:02:35.980 pdump: explicitly disabled via build config 00:02:35.980 table: explicitly disabled via build config 00:02:35.980 pipeline: explicitly disabled via build config 00:02:35.980 graph: explicitly disabled via build config 00:02:35.980 node: explicitly disabled via build config 00:02:35.980 00:02:35.980 drivers: 00:02:35.980 common/cpt: not in enabled drivers build config 00:02:35.980 common/dpaax: not in enabled drivers build config 00:02:35.980 common/iavf: not in enabled drivers build config 00:02:35.980 common/idpf: not in enabled drivers build config 00:02:35.980 common/ionic: not in enabled drivers build config 00:02:35.980 common/mvep: not in enabled drivers build config 00:02:35.980 common/octeontx: not in enabled drivers build config 00:02:35.980 bus/auxiliary: not in enabled drivers build config 00:02:35.980 bus/cdx: not in enabled drivers build config 00:02:35.980 bus/dpaa: not in enabled drivers build config 00:02:35.980 bus/fslmc: not in enabled drivers build config 00:02:35.980 bus/ifpga: not in enabled drivers build config 00:02:35.980 bus/platform: not in enabled drivers build config 00:02:35.980 bus/uacce: not in enabled drivers build config 00:02:35.980 bus/vmbus: not in enabled drivers build config 00:02:35.980 common/cnxk: not in enabled drivers build config 00:02:35.980 common/mlx5: not in enabled drivers build config 00:02:35.980 common/nfp: not in enabled drivers build config 00:02:35.980 common/nitrox: not in enabled drivers build config 00:02:35.980 common/qat: not in enabled drivers build config 
00:02:35.980 common/sfc_efx: not in enabled drivers build config 00:02:35.980 mempool/bucket: not in enabled drivers build config 00:02:35.980 mempool/cnxk: not in enabled drivers build config 00:02:35.980 mempool/dpaa: not in enabled drivers build config 00:02:35.980 mempool/dpaa2: not in enabled drivers build config 00:02:35.980 mempool/octeontx: not in enabled drivers build config 00:02:35.980 mempool/stack: not in enabled drivers build config 00:02:35.980 dma/cnxk: not in enabled drivers build config 00:02:35.980 dma/dpaa: not in enabled drivers build config 00:02:35.980 dma/dpaa2: not in enabled drivers build config 00:02:35.980 dma/hisilicon: not in enabled drivers build config 00:02:35.980 dma/idxd: not in enabled drivers build config 00:02:35.980 dma/ioat: not in enabled drivers build config 00:02:35.980 dma/skeleton: not in enabled drivers build config 00:02:35.980 net/af_packet: not in enabled drivers build config 00:02:35.980 net/af_xdp: not in enabled drivers build config 00:02:35.980 net/ark: not in enabled drivers build config 00:02:35.980 net/atlantic: not in enabled drivers build config 00:02:35.980 net/avp: not in enabled drivers build config 00:02:35.980 net/axgbe: not in enabled drivers build config 00:02:35.980 net/bnx2x: not in enabled drivers build config 00:02:35.980 net/bnxt: not in enabled drivers build config 00:02:35.980 net/bonding: not in enabled drivers build config 00:02:35.980 net/cnxk: not in enabled drivers build config 00:02:35.980 net/cpfl: not in enabled drivers build config 00:02:35.980 net/cxgbe: not in enabled drivers build config 00:02:35.980 net/dpaa: not in enabled drivers build config 00:02:35.980 net/dpaa2: not in enabled drivers build config 00:02:35.980 net/e1000: not in enabled drivers build config 00:02:35.980 net/ena: not in enabled drivers build config 00:02:35.980 net/enetc: not in enabled drivers build config 00:02:35.980 net/enetfec: not in enabled drivers build config 00:02:35.980 net/enic: not in enabled drivers build config 00:02:35.980 net/failsafe: not in enabled drivers build config 00:02:35.980 net/fm10k: not in enabled drivers build config 00:02:35.980 net/gve: not in enabled drivers build config 00:02:35.980 net/hinic: not in enabled drivers build config 00:02:35.980 net/hns3: not in enabled drivers build config 00:02:35.980 net/i40e: not in enabled drivers build config 00:02:35.980 net/iavf: not in enabled drivers build config 00:02:35.980 net/ice: not in enabled drivers build config 00:02:35.980 net/idpf: not in enabled drivers build config 00:02:35.980 net/igc: not in enabled drivers build config 00:02:35.980 net/ionic: not in enabled drivers build config 00:02:35.980 net/ipn3ke: not in enabled drivers build config 00:02:35.980 net/ixgbe: not in enabled drivers build config 00:02:35.980 net/mana: not in enabled drivers build config 00:02:35.980 net/memif: not in enabled drivers build config 00:02:35.980 net/mlx4: not in enabled drivers build config 00:02:35.980 net/mlx5: not in enabled drivers build config 00:02:35.980 net/mvneta: not in enabled drivers build config 00:02:35.980 net/mvpp2: not in enabled drivers build config 00:02:35.980 net/netvsc: not in enabled drivers build config 00:02:35.980 net/nfb: not in enabled drivers build config 00:02:35.980 net/nfp: not in enabled drivers build config 00:02:35.980 net/ngbe: not in enabled drivers build config 00:02:35.980 net/null: not in enabled drivers build config 00:02:35.980 net/octeontx: not in enabled drivers build config 00:02:35.980 net/octeon_ep: not in enabled 
drivers build config 00:02:35.980 net/pcap: not in enabled drivers build config 00:02:35.980 net/pfe: not in enabled drivers build config 00:02:35.980 net/qede: not in enabled drivers build config 00:02:35.980 net/ring: not in enabled drivers build config 00:02:35.980 net/sfc: not in enabled drivers build config 00:02:35.980 net/softnic: not in enabled drivers build config 00:02:35.980 net/tap: not in enabled drivers build config 00:02:35.980 net/thunderx: not in enabled drivers build config 00:02:35.980 net/txgbe: not in enabled drivers build config 00:02:35.980 net/vdev_netvsc: not in enabled drivers build config 00:02:35.980 net/vhost: not in enabled drivers build config 00:02:35.980 net/virtio: not in enabled drivers build config 00:02:35.980 net/vmxnet3: not in enabled drivers build config 00:02:35.980 raw/*: missing internal dependency, "rawdev" 00:02:35.980 crypto/armv8: not in enabled drivers build config 00:02:35.980 crypto/bcmfs: not in enabled drivers build config 00:02:35.980 crypto/caam_jr: not in enabled drivers build config 00:02:35.980 crypto/ccp: not in enabled drivers build config 00:02:35.980 crypto/cnxk: not in enabled drivers build config 00:02:35.980 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.980 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.980 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.980 crypto/mlx5: not in enabled drivers build config 00:02:35.980 crypto/mvsam: not in enabled drivers build config 00:02:35.980 crypto/nitrox: not in enabled drivers build config 00:02:35.980 crypto/null: not in enabled drivers build config 00:02:35.980 crypto/octeontx: not in enabled drivers build config 00:02:35.980 crypto/openssl: not in enabled drivers build config 00:02:35.980 crypto/scheduler: not in enabled drivers build config 00:02:35.980 crypto/uadk: not in enabled drivers build config 00:02:35.980 crypto/virtio: not in enabled drivers build config 00:02:35.980 compress/isal: not in enabled drivers build config 00:02:35.980 compress/mlx5: not in enabled drivers build config 00:02:35.980 compress/nitrox: not in enabled drivers build config 00:02:35.980 compress/octeontx: not in enabled drivers build config 00:02:35.980 compress/zlib: not in enabled drivers build config 00:02:35.980 regex/*: missing internal dependency, "regexdev" 00:02:35.980 ml/*: missing internal dependency, "mldev" 00:02:35.980 vdpa/ifc: not in enabled drivers build config 00:02:35.980 vdpa/mlx5: not in enabled drivers build config 00:02:35.980 vdpa/nfp: not in enabled drivers build config 00:02:35.980 vdpa/sfc: not in enabled drivers build config 00:02:35.980 event/*: missing internal dependency, "eventdev" 00:02:35.980 baseband/*: missing internal dependency, "bbdev" 00:02:35.980 gpu/*: missing internal dependency, "gpudev" 00:02:35.980 00:02:35.980 00:02:36.547 Build targets in project: 85 00:02:36.547 00:02:36.547 DPDK 24.03.0 00:02:36.547 00:02:36.547 User defined options 00:02:36.547 buildtype : debug 00:02:36.547 default_library : shared 00:02:36.547 libdir : lib 00:02:36.547 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:36.547 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:36.547 c_link_args : 00:02:36.547 cpu_instruction_set: native 00:02:36.547 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:36.547 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:36.547 enable_docs : false 00:02:36.547 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:36.547 enable_kmods : false 00:02:36.547 max_lcores : 128 00:02:36.547 tests : false 00:02:36.547 00:02:36.547 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.122 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:37.122 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.122 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.122 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.122 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.122 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.122 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.122 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.122 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.122 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.122 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.122 [11/268] Linking static target lib/librte_kvargs.a 00:02:37.122 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.122 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.122 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.122 [15/268] Linking static target lib/librte_log.a 00:02:37.385 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.645 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.906 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.906 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.906 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.906 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:37.906 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.906 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.906 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.906 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.906 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:37.906 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.906 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.906 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.906 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
00:02:37.906 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.906 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.906 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.168 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.168 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.168 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.168 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.168 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.168 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.168 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.168 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.168 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.168 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.168 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.168 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.168 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.168 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.168 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.168 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.168 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.168 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.168 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.168 [53/268] Linking static target lib/librte_telemetry.a 00:02:38.168 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.168 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.168 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.168 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.168 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.168 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.168 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.168 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.429 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.429 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.429 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.429 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.429 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.719 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.719 [68/268] Linking target lib/librte_log.so.24.1 00:02:38.719 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.719 [70/268] Linking static target lib/librte_pci.a 00:02:38.719 [71/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.719 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.979 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.979 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.979 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.979 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.979 [77/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.979 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.979 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.979 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.979 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.979 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.979 [83/268] Linking target lib/librte_kvargs.so.24.1 00:02:38.979 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:38.979 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.979 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.979 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.979 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:38.979 [89/268] Linking static target lib/librte_ring.a 00:02:38.979 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.979 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.979 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.979 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:38.979 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:38.979 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.979 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:38.979 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:38.979 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.979 [99/268] Linking static target lib/librte_meter.a 00:02:38.979 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.979 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.979 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.242 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.242 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.242 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.242 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.242 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.242 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.242 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.242 [110/268] Linking static target lib/librte_eal.a 00:02:39.242 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.242 [112/268] 
Linking static target lib/librte_rcu.a 00:02:39.242 [113/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:39.242 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.242 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.242 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.242 [117/268] Linking target lib/librte_telemetry.so.24.1 00:02:39.242 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.242 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.242 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:39.242 [121/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.242 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.242 [123/268] Linking static target lib/librte_mempool.a 00:02:39.503 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.503 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.503 [126/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.503 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.503 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.503 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.503 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.504 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.504 [132/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:39.504 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.504 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.770 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.770 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.770 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.770 [138/268] Linking static target lib/librte_net.a 00:02:39.770 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.770 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.770 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.770 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.029 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.029 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.029 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.029 [146/268] Linking static target lib/librte_cmdline.a 00:02:40.029 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.029 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.029 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:40.029 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.029 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.029 [152/268] Linking static target lib/librte_timer.a 
00:02:40.029 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:40.029 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:40.029 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.288 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.288 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.288 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.288 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:40.288 [160/268] Linking static target lib/librte_dmadev.a 00:02:40.288 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.288 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.288 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:40.288 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.288 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:40.546 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:40.546 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:40.546 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.546 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.546 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.546 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.546 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.546 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.546 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:40.546 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.546 [176/268] Linking static target lib/librte_compressdev.a 00:02:40.546 [177/268] Linking static target lib/librte_power.a 00:02:40.546 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.546 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.546 [180/268] Linking static target lib/librte_hash.a 00:02:40.546 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.804 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.804 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.804 [184/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.804 [185/268] Linking static target lib/librte_mbuf.a 00:02:40.804 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:40.804 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.804 [188/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.804 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.804 [190/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.804 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.804 [192/268] Generating lib/cmdline.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:40.804 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.804 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.804 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:40.804 [196/268] Linking static target lib/librte_reorder.a 00:02:41.061 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.062 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.062 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.062 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.062 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:41.062 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.062 [203/268] Linking static target lib/librte_security.a 00:02:41.062 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.062 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.062 [206/268] Linking static target drivers/librte_bus_vdev.a 00:02:41.062 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.062 [208/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.062 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.062 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.062 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.062 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.062 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:41.062 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.319 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.319 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:41.319 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.319 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.319 [219/268] Linking static target drivers/librte_mempool_ring.a 00:02:41.319 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.319 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.319 [222/268] Linking static target lib/librte_cryptodev.a 00:02:41.319 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.577 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.577 [225/268] Linking static target lib/librte_ethdev.a 00:02:41.577 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.511 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.886 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.262 [229/268] Generating lib/eal.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:45.262 [230/268] Linking target lib/librte_eal.so.24.1 00:02:45.519 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:45.519 [232/268] Linking target lib/librte_ring.so.24.1 00:02:45.519 [233/268] Linking target lib/librte_timer.so.24.1 00:02:45.519 [234/268] Linking target lib/librte_meter.so.24.1 00:02:45.519 [235/268] Linking target lib/librte_pci.so.24.1 00:02:45.519 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:45.519 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:45.519 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.519 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:45.519 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:45.519 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:45.519 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:45.519 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:45.519 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:45.519 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:45.519 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:45.777 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:45.777 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:45.777 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:45.777 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:46.036 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:46.036 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:46.036 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:46.036 [254/268] Linking target lib/librte_net.so.24.1 00:02:46.036 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:46.036 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:46.036 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:46.036 [258/268] Linking target lib/librte_hash.so.24.1 00:02:46.036 [259/268] Linking target lib/librte_security.so.24.1 00:02:46.036 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:46.036 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:46.292 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:46.292 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:46.292 [264/268] Linking target lib/librte_power.so.24.1 00:02:50.476 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.476 [266/268] Linking static target lib/librte_vhost.a 00:02:51.042 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.042 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:51.042 INFO: autodetecting backend as ninja 00:02:51.042 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:12.967 CC lib/ut_mock/mock.o 00:03:12.967 CC lib/ut/ut.o 00:03:12.967 CC lib/log/log.o 00:03:12.967 CC lib/log/log_flags.o 00:03:12.967 CC lib/log/log_deprecated.o 00:03:12.967 LIB libspdk_ut.a 
00:03:12.967 LIB libspdk_ut_mock.a 00:03:12.967 LIB libspdk_log.a 00:03:12.967 SO libspdk_ut.so.2.0 00:03:12.967 SO libspdk_ut_mock.so.6.0 00:03:12.967 SO libspdk_log.so.7.1 00:03:12.967 SYMLINK libspdk_ut_mock.so 00:03:12.967 SYMLINK libspdk_ut.so 00:03:12.967 SYMLINK libspdk_log.so 00:03:12.967 CC lib/ioat/ioat.o 00:03:12.967 CXX lib/trace_parser/trace.o 00:03:12.967 CC lib/util/base64.o 00:03:12.967 CC lib/dma/dma.o 00:03:12.967 CC lib/util/bit_array.o 00:03:12.967 CC lib/util/cpuset.o 00:03:12.967 CC lib/util/crc16.o 00:03:12.967 CC lib/util/crc32.o 00:03:12.967 CC lib/util/crc32c.o 00:03:12.967 CC lib/util/crc32_ieee.o 00:03:12.967 CC lib/util/crc64.o 00:03:12.967 CC lib/util/dif.o 00:03:12.967 CC lib/util/fd.o 00:03:12.967 CC lib/util/fd_group.o 00:03:12.967 CC lib/util/file.o 00:03:12.967 CC lib/util/hexlify.o 00:03:12.967 CC lib/util/iov.o 00:03:12.967 CC lib/util/math.o 00:03:12.967 CC lib/util/net.o 00:03:12.967 CC lib/util/pipe.o 00:03:12.967 CC lib/util/string.o 00:03:12.967 CC lib/util/strerror_tls.o 00:03:12.967 CC lib/util/uuid.o 00:03:12.967 CC lib/util/zipf.o 00:03:12.967 CC lib/util/xor.o 00:03:12.967 CC lib/util/md5.o 00:03:12.967 CC lib/vfio_user/host/vfio_user_pci.o 00:03:12.967 CC lib/vfio_user/host/vfio_user.o 00:03:12.967 LIB libspdk_dma.a 00:03:12.967 SO libspdk_dma.so.5.0 00:03:12.967 SYMLINK libspdk_dma.so 00:03:12.967 LIB libspdk_ioat.a 00:03:12.967 LIB libspdk_vfio_user.a 00:03:12.967 SO libspdk_ioat.so.7.0 00:03:12.967 SO libspdk_vfio_user.so.5.0 00:03:12.967 SYMLINK libspdk_ioat.so 00:03:12.967 SYMLINK libspdk_vfio_user.so 00:03:12.967 LIB libspdk_util.a 00:03:12.967 SO libspdk_util.so.10.0 00:03:12.967 SYMLINK libspdk_util.so 00:03:12.967 CC lib/conf/conf.o 00:03:12.967 CC lib/rdma_provider/common.o 00:03:12.967 CC lib/json/json_parse.o 00:03:12.968 CC lib/env_dpdk/env.o 00:03:12.968 CC lib/idxd/idxd.o 00:03:12.968 CC lib/json/json_util.o 00:03:12.968 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:12.968 CC lib/env_dpdk/memory.o 00:03:12.968 CC lib/idxd/idxd_user.o 00:03:12.968 CC lib/vmd/led.o 00:03:12.968 CC lib/rdma_utils/rdma_utils.o 00:03:12.968 CC lib/vmd/vmd.o 00:03:12.968 CC lib/json/json_write.o 00:03:12.968 CC lib/idxd/idxd_kernel.o 00:03:12.968 CC lib/env_dpdk/pci.o 00:03:12.968 CC lib/env_dpdk/init.o 00:03:12.968 CC lib/env_dpdk/threads.o 00:03:12.968 CC lib/env_dpdk/pci_ioat.o 00:03:12.968 CC lib/env_dpdk/pci_virtio.o 00:03:12.968 CC lib/env_dpdk/pci_vmd.o 00:03:12.968 CC lib/env_dpdk/pci_idxd.o 00:03:12.968 CC lib/env_dpdk/pci_event.o 00:03:12.968 CC lib/env_dpdk/sigbus_handler.o 00:03:12.968 CC lib/env_dpdk/pci_dpdk.o 00:03:12.968 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.968 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.968 LIB libspdk_trace_parser.a 00:03:12.968 SO libspdk_trace_parser.so.6.0 00:03:12.968 SYMLINK libspdk_trace_parser.so 00:03:12.968 LIB libspdk_rdma_provider.a 00:03:12.968 SO libspdk_rdma_provider.so.6.0 00:03:12.968 LIB libspdk_conf.a 00:03:12.968 SO libspdk_conf.so.6.0 00:03:12.968 SYMLINK libspdk_rdma_provider.so 00:03:12.968 LIB libspdk_json.a 00:03:12.968 SYMLINK libspdk_conf.so 00:03:12.968 SO libspdk_json.so.6.0 00:03:12.968 LIB libspdk_rdma_utils.a 00:03:12.968 SYMLINK libspdk_json.so 00:03:12.968 SO libspdk_rdma_utils.so.1.0 00:03:12.968 SYMLINK libspdk_rdma_utils.so 00:03:12.968 CC lib/jsonrpc/jsonrpc_server.o 00:03:12.968 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:12.968 CC lib/jsonrpc/jsonrpc_client.o 00:03:12.968 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.968 LIB libspdk_idxd.a 00:03:12.968 SO libspdk_idxd.so.12.1 
00:03:12.968 SYMLINK libspdk_idxd.so 00:03:12.968 LIB libspdk_vmd.a 00:03:12.968 SO libspdk_vmd.so.6.0 00:03:12.968 LIB libspdk_jsonrpc.a 00:03:13.226 SYMLINK libspdk_vmd.so 00:03:13.226 SO libspdk_jsonrpc.so.6.0 00:03:13.226 SYMLINK libspdk_jsonrpc.so 00:03:13.226 CC lib/rpc/rpc.o 00:03:13.485 LIB libspdk_rpc.a 00:03:13.485 SO libspdk_rpc.so.6.0 00:03:13.743 SYMLINK libspdk_rpc.so 00:03:13.743 CC lib/keyring/keyring.o 00:03:13.743 CC lib/keyring/keyring_rpc.o 00:03:13.743 CC lib/trace/trace.o 00:03:13.743 CC lib/notify/notify.o 00:03:13.743 CC lib/trace/trace_flags.o 00:03:13.743 CC lib/notify/notify_rpc.o 00:03:13.743 CC lib/trace/trace_rpc.o 00:03:14.001 LIB libspdk_notify.a 00:03:14.001 SO libspdk_notify.so.6.0 00:03:14.001 SYMLINK libspdk_notify.so 00:03:14.001 LIB libspdk_keyring.a 00:03:14.001 LIB libspdk_trace.a 00:03:14.001 SO libspdk_keyring.so.2.0 00:03:14.001 SO libspdk_trace.so.11.0 00:03:14.001 SYMLINK libspdk_keyring.so 00:03:14.259 SYMLINK libspdk_trace.so 00:03:14.259 LIB libspdk_env_dpdk.a 00:03:14.259 CC lib/thread/thread.o 00:03:14.259 CC lib/thread/iobuf.o 00:03:14.259 CC lib/sock/sock.o 00:03:14.259 CC lib/sock/sock_rpc.o 00:03:14.259 SO libspdk_env_dpdk.so.15.1 00:03:14.518 SYMLINK libspdk_env_dpdk.so 00:03:14.776 LIB libspdk_sock.a 00:03:14.776 SO libspdk_sock.so.10.0 00:03:14.776 SYMLINK libspdk_sock.so 00:03:15.035 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:15.035 CC lib/nvme/nvme_ctrlr.o 00:03:15.035 CC lib/nvme/nvme_fabric.o 00:03:15.035 CC lib/nvme/nvme_ns_cmd.o 00:03:15.035 CC lib/nvme/nvme_ns.o 00:03:15.035 CC lib/nvme/nvme_pcie_common.o 00:03:15.035 CC lib/nvme/nvme_pcie.o 00:03:15.035 CC lib/nvme/nvme_qpair.o 00:03:15.035 CC lib/nvme/nvme.o 00:03:15.035 CC lib/nvme/nvme_quirks.o 00:03:15.035 CC lib/nvme/nvme_transport.o 00:03:15.035 CC lib/nvme/nvme_discovery.o 00:03:15.035 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.035 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.035 CC lib/nvme/nvme_tcp.o 00:03:15.035 CC lib/nvme/nvme_opal.o 00:03:15.035 CC lib/nvme/nvme_io_msg.o 00:03:15.035 CC lib/nvme/nvme_poll_group.o 00:03:15.035 CC lib/nvme/nvme_zns.o 00:03:15.035 CC lib/nvme/nvme_stubs.o 00:03:15.035 CC lib/nvme/nvme_auth.o 00:03:15.035 CC lib/nvme/nvme_cuse.o 00:03:15.035 CC lib/nvme/nvme_vfio_user.o 00:03:15.035 CC lib/nvme/nvme_rdma.o 00:03:16.411 LIB libspdk_thread.a 00:03:16.411 SO libspdk_thread.so.11.0 00:03:16.411 SYMLINK libspdk_thread.so 00:03:16.411 CC lib/blob/blobstore.o 00:03:16.411 CC lib/fsdev/fsdev.o 00:03:16.411 CC lib/init/json_config.o 00:03:16.411 CC lib/accel/accel.o 00:03:16.411 CC lib/virtio/virtio.o 00:03:16.411 CC lib/blob/request.o 00:03:16.411 CC lib/init/subsystem.o 00:03:16.411 CC lib/vfu_tgt/tgt_endpoint.o 00:03:16.411 CC lib/accel/accel_rpc.o 00:03:16.411 CC lib/fsdev/fsdev_io.o 00:03:16.411 CC lib/virtio/virtio_vhost_user.o 00:03:16.411 CC lib/blob/zeroes.o 00:03:16.411 CC lib/init/subsystem_rpc.o 00:03:16.411 CC lib/vfu_tgt/tgt_rpc.o 00:03:16.412 CC lib/fsdev/fsdev_rpc.o 00:03:16.412 CC lib/accel/accel_sw.o 00:03:16.412 CC lib/virtio/virtio_vfio_user.o 00:03:16.412 CC lib/blob/blob_bs_dev.o 00:03:16.412 CC lib/init/rpc.o 00:03:16.412 CC lib/virtio/virtio_pci.o 00:03:16.670 LIB libspdk_init.a 00:03:16.928 SO libspdk_init.so.6.0 00:03:16.928 LIB libspdk_virtio.a 00:03:16.928 SYMLINK libspdk_init.so 00:03:16.928 SO libspdk_virtio.so.7.0 00:03:16.928 LIB libspdk_vfu_tgt.a 00:03:16.928 SO libspdk_vfu_tgt.so.3.0 00:03:16.928 SYMLINK libspdk_virtio.so 00:03:16.928 SYMLINK libspdk_vfu_tgt.so 00:03:16.928 CC lib/event/app.o 00:03:16.928 CC 
lib/event/reactor.o 00:03:16.928 CC lib/event/log_rpc.o 00:03:16.928 CC lib/event/app_rpc.o 00:03:16.928 CC lib/event/scheduler_static.o 00:03:17.185 LIB libspdk_fsdev.a 00:03:17.185 SO libspdk_fsdev.so.2.0 00:03:17.185 SYMLINK libspdk_fsdev.so 00:03:17.442 LIB libspdk_nvme.a 00:03:17.442 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:17.442 LIB libspdk_event.a 00:03:17.442 SO libspdk_event.so.14.0 00:03:17.442 SO libspdk_nvme.so.14.1 00:03:17.700 SYMLINK libspdk_event.so 00:03:17.700 LIB libspdk_accel.a 00:03:17.700 SO libspdk_accel.so.16.0 00:03:17.700 SYMLINK libspdk_accel.so 00:03:17.700 SYMLINK libspdk_nvme.so 00:03:17.958 CC lib/bdev/bdev.o 00:03:17.958 CC lib/bdev/bdev_rpc.o 00:03:17.958 CC lib/bdev/bdev_zone.o 00:03:17.958 CC lib/bdev/part.o 00:03:17.958 CC lib/bdev/scsi_nvme.o 00:03:18.216 LIB libspdk_fuse_dispatcher.a 00:03:18.216 SO libspdk_fuse_dispatcher.so.1.0 00:03:18.216 SYMLINK libspdk_fuse_dispatcher.so 00:03:19.592 LIB libspdk_blob.a 00:03:19.592 SO libspdk_blob.so.11.0 00:03:19.850 SYMLINK libspdk_blob.so 00:03:19.850 CC lib/blobfs/blobfs.o 00:03:19.850 CC lib/blobfs/tree.o 00:03:19.850 CC lib/lvol/lvol.o 00:03:20.537 LIB libspdk_bdev.a 00:03:20.537 SO libspdk_bdev.so.17.0 00:03:20.537 SYMLINK libspdk_bdev.so 00:03:20.876 LIB libspdk_blobfs.a 00:03:20.876 SO libspdk_blobfs.so.10.0 00:03:20.876 CC lib/nbd/nbd.o 00:03:20.876 CC lib/nvmf/ctrlr.o 00:03:20.876 CC lib/nbd/nbd_rpc.o 00:03:20.876 CC lib/scsi/dev.o 00:03:20.876 CC lib/nvmf/ctrlr_discovery.o 00:03:20.876 CC lib/scsi/lun.o 00:03:20.876 CC lib/nvmf/ctrlr_bdev.o 00:03:20.876 CC lib/ublk/ublk.o 00:03:20.876 CC lib/scsi/port.o 00:03:20.876 CC lib/ftl/ftl_core.o 00:03:20.876 CC lib/nvmf/subsystem.o 00:03:20.876 CC lib/ublk/ublk_rpc.o 00:03:20.876 CC lib/scsi/scsi.o 00:03:20.876 CC lib/ftl/ftl_init.o 00:03:20.877 CC lib/scsi/scsi_bdev.o 00:03:20.877 CC lib/nvmf/nvmf.o 00:03:20.877 CC lib/ftl/ftl_layout.o 00:03:20.877 CC lib/nvmf/nvmf_rpc.o 00:03:20.877 CC lib/scsi/scsi_pr.o 00:03:20.877 CC lib/ftl/ftl_io.o 00:03:20.877 CC lib/ftl/ftl_debug.o 00:03:20.877 CC lib/nvmf/transport.o 00:03:20.877 CC lib/scsi/scsi_rpc.o 00:03:20.877 CC lib/nvmf/tcp.o 00:03:20.877 CC lib/nvmf/stubs.o 00:03:20.877 CC lib/scsi/task.o 00:03:20.877 CC lib/ftl/ftl_sb.o 00:03:20.877 CC lib/ftl/ftl_l2p.o 00:03:20.877 CC lib/nvmf/mdns_server.o 00:03:20.877 CC lib/ftl/ftl_l2p_flat.o 00:03:20.877 CC lib/nvmf/vfio_user.o 00:03:20.877 CC lib/ftl/ftl_nv_cache.o 00:03:20.877 CC lib/nvmf/rdma.o 00:03:20.877 CC lib/ftl/ftl_band.o 00:03:20.877 CC lib/nvmf/auth.o 00:03:20.877 CC lib/ftl/ftl_band_ops.o 00:03:20.877 CC lib/ftl/ftl_writer.o 00:03:20.877 CC lib/ftl/ftl_rq.o 00:03:20.877 CC lib/ftl/ftl_reloc.o 00:03:20.877 CC lib/ftl/ftl_l2p_cache.o 00:03:20.877 CC lib/ftl/ftl_p2l.o 00:03:20.877 CC lib/ftl/ftl_p2l_log.o 00:03:20.877 CC lib/ftl/mngt/ftl_mngt.o 00:03:20.877 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:20.877 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:20.877 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.877 SYMLINK libspdk_blobfs.so 00:03:20.877 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.877 LIB libspdk_lvol.a 00:03:20.877 SO libspdk_lvol.so.10.0 00:03:21.149 SYMLINK libspdk_lvol.so 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:21.149 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:21.149 CC 
lib/ftl/utils/ftl_conf.o 00:03:21.149 CC lib/ftl/utils/ftl_md.o 00:03:21.149 CC lib/ftl/utils/ftl_mempool.o 00:03:21.435 CC lib/ftl/utils/ftl_bitmap.o 00:03:21.435 CC lib/ftl/utils/ftl_property.o 00:03:21.435 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:21.435 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:21.435 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:21.435 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:21.435 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:21.435 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:21.435 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:21.435 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:21.435 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:21.435 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:21.435 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:21.435 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:21.435 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:21.435 CC lib/ftl/base/ftl_base_dev.o 00:03:21.700 CC lib/ftl/base/ftl_base_bdev.o 00:03:21.700 CC lib/ftl/ftl_trace.o 00:03:21.700 LIB libspdk_nbd.a 00:03:21.700 SO libspdk_nbd.so.7.0 00:03:21.700 SYMLINK libspdk_nbd.so 00:03:21.700 LIB libspdk_scsi.a 00:03:21.958 SO libspdk_scsi.so.9.0 00:03:21.958 SYMLINK libspdk_scsi.so 00:03:21.958 LIB libspdk_ublk.a 00:03:21.958 SO libspdk_ublk.so.3.0 00:03:21.958 SYMLINK libspdk_ublk.so 00:03:22.216 CC lib/vhost/vhost.o 00:03:22.216 CC lib/iscsi/conn.o 00:03:22.216 CC lib/iscsi/init_grp.o 00:03:22.216 CC lib/vhost/vhost_rpc.o 00:03:22.216 CC lib/iscsi/iscsi.o 00:03:22.216 CC lib/vhost/vhost_scsi.o 00:03:22.216 CC lib/vhost/vhost_blk.o 00:03:22.216 CC lib/iscsi/param.o 00:03:22.216 CC lib/iscsi/portal_grp.o 00:03:22.216 CC lib/vhost/rte_vhost_user.o 00:03:22.216 CC lib/iscsi/tgt_node.o 00:03:22.216 CC lib/iscsi/iscsi_subsystem.o 00:03:22.216 CC lib/iscsi/iscsi_rpc.o 00:03:22.216 CC lib/iscsi/task.o 00:03:22.216 LIB libspdk_ftl.a 00:03:22.475 SO libspdk_ftl.so.9.0 00:03:22.734 SYMLINK libspdk_ftl.so 00:03:23.299 LIB libspdk_vhost.a 00:03:23.299 SO libspdk_vhost.so.8.0 00:03:23.300 LIB libspdk_nvmf.a 00:03:23.559 SYMLINK libspdk_vhost.so 00:03:23.559 SO libspdk_nvmf.so.20.0 00:03:23.559 LIB libspdk_iscsi.a 00:03:23.559 SO libspdk_iscsi.so.8.0 00:03:23.559 SYMLINK libspdk_nvmf.so 00:03:23.818 SYMLINK libspdk_iscsi.so 00:03:24.077 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.077 CC module/vfu_device/vfu_virtio.o 00:03:24.077 CC module/vfu_device/vfu_virtio_blk.o 00:03:24.077 CC module/vfu_device/vfu_virtio_scsi.o 00:03:24.077 CC module/vfu_device/vfu_virtio_rpc.o 00:03:24.077 CC module/vfu_device/vfu_virtio_fs.o 00:03:24.077 CC module/blob/bdev/blob_bdev.o 00:03:24.077 CC module/accel/dsa/accel_dsa.o 00:03:24.077 CC module/keyring/linux/keyring.o 00:03:24.077 CC module/accel/dsa/accel_dsa_rpc.o 00:03:24.077 CC module/scheduler/gscheduler/gscheduler.o 00:03:24.077 CC module/sock/posix/posix.o 00:03:24.077 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.077 CC module/keyring/linux/keyring_rpc.o 00:03:24.077 CC module/keyring/file/keyring.o 00:03:24.077 CC module/keyring/file/keyring_rpc.o 00:03:24.077 CC module/fsdev/aio/fsdev_aio.o 00:03:24.077 CC module/accel/iaa/accel_iaa.o 00:03:24.077 CC module/accel/ioat/accel_ioat.o 00:03:24.077 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:24.077 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.077 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.077 CC module/accel/error/accel_error.o 00:03:24.077 CC module/accel/error/accel_error_rpc.o 00:03:24.077 CC module/fsdev/aio/linux_aio_mgr.o 00:03:24.077 CC module/accel/iaa/accel_iaa_rpc.o 00:03:24.077 LIB libspdk_env_dpdk_rpc.a 
00:03:24.077 SO libspdk_env_dpdk_rpc.so.6.0 00:03:24.336 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.336 LIB libspdk_keyring_linux.a 00:03:24.336 LIB libspdk_keyring_file.a 00:03:24.336 LIB libspdk_scheduler_gscheduler.a 00:03:24.336 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.336 SO libspdk_keyring_linux.so.1.0 00:03:24.336 SO libspdk_keyring_file.so.2.0 00:03:24.336 SO libspdk_scheduler_gscheduler.so.4.0 00:03:24.336 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:24.336 LIB libspdk_accel_ioat.a 00:03:24.336 LIB libspdk_scheduler_dynamic.a 00:03:24.336 LIB libspdk_accel_iaa.a 00:03:24.336 SO libspdk_scheduler_dynamic.so.4.0 00:03:24.336 SO libspdk_accel_ioat.so.6.0 00:03:24.336 SYMLINK libspdk_keyring_linux.so 00:03:24.336 SYMLINK libspdk_scheduler_gscheduler.so 00:03:24.336 SYMLINK libspdk_keyring_file.so 00:03:24.336 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:24.336 SO libspdk_accel_iaa.so.3.0 00:03:24.336 SYMLINK libspdk_scheduler_dynamic.so 00:03:24.336 SYMLINK libspdk_accel_ioat.so 00:03:24.336 LIB libspdk_accel_error.a 00:03:24.336 LIB libspdk_accel_dsa.a 00:03:24.336 SYMLINK libspdk_accel_iaa.so 00:03:24.336 SO libspdk_accel_error.so.2.0 00:03:24.336 SO libspdk_accel_dsa.so.5.0 00:03:24.595 SYMLINK libspdk_accel_error.so 00:03:24.595 SYMLINK libspdk_accel_dsa.so 00:03:24.595 LIB libspdk_blob_bdev.a 00:03:24.595 SO libspdk_blob_bdev.so.11.0 00:03:24.595 SYMLINK libspdk_blob_bdev.so 00:03:24.595 LIB libspdk_vfu_device.a 00:03:24.595 SO libspdk_vfu_device.so.3.0 00:03:24.854 SYMLINK libspdk_vfu_device.so 00:03:24.854 LIB libspdk_fsdev_aio.a 00:03:24.854 CC module/bdev/gpt/gpt.o 00:03:24.854 CC module/bdev/gpt/vbdev_gpt.o 00:03:24.854 CC module/bdev/delay/vbdev_delay.o 00:03:24.854 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:24.854 CC module/bdev/lvol/vbdev_lvol.o 00:03:24.854 CC module/blobfs/bdev/blobfs_bdev.o 00:03:24.854 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:24.854 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:24.854 CC module/bdev/error/vbdev_error.o 00:03:24.854 CC module/bdev/error/vbdev_error_rpc.o 00:03:24.854 CC module/bdev/passthru/vbdev_passthru.o 00:03:24.854 CC module/bdev/malloc/bdev_malloc.o 00:03:24.854 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:24.854 CC module/bdev/null/bdev_null.o 00:03:24.854 CC module/bdev/split/vbdev_split.o 00:03:24.854 CC module/bdev/aio/bdev_aio.o 00:03:24.854 CC module/bdev/nvme/bdev_nvme.o 00:03:24.854 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:24.854 CC module/bdev/null/bdev_null_rpc.o 00:03:24.854 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:24.854 CC module/bdev/split/vbdev_split_rpc.o 00:03:24.854 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.854 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.854 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.854 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:24.854 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.854 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.854 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.854 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.854 CC module/bdev/raid/bdev_raid.o 00:03:24.854 CC module/bdev/nvme/nvme_rpc.o 00:03:24.854 CC module/bdev/raid/bdev_raid_rpc.o 00:03:24.854 CC module/bdev/nvme/bdev_mdns_client.o 00:03:24.854 CC module/bdev/raid/bdev_raid_sb.o 00:03:24.854 CC module/bdev/nvme/vbdev_opal.o 00:03:24.854 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.854 CC module/bdev/raid/raid0.o 00:03:24.854 CC module/bdev/ftl/bdev_ftl.o 00:03:24.854 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.854 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 
00:03:24.854 CC module/bdev/raid/raid1.o 00:03:24.854 CC module/bdev/raid/concat.o 00:03:24.854 SO libspdk_fsdev_aio.so.1.0 00:03:24.854 LIB libspdk_sock_posix.a 00:03:24.854 SYMLINK libspdk_fsdev_aio.so 00:03:25.113 SO libspdk_sock_posix.so.6.0 00:03:25.113 SYMLINK libspdk_sock_posix.so 00:03:25.113 LIB libspdk_blobfs_bdev.a 00:03:25.113 SO libspdk_blobfs_bdev.so.6.0 00:03:25.371 LIB libspdk_bdev_gpt.a 00:03:25.371 SO libspdk_bdev_gpt.so.6.0 00:03:25.372 SYMLINK libspdk_blobfs_bdev.so 00:03:25.372 LIB libspdk_bdev_ftl.a 00:03:25.372 LIB libspdk_bdev_split.a 00:03:25.372 SYMLINK libspdk_bdev_gpt.so 00:03:25.372 SO libspdk_bdev_ftl.so.6.0 00:03:25.372 SO libspdk_bdev_split.so.6.0 00:03:25.372 LIB libspdk_bdev_null.a 00:03:25.372 LIB libspdk_bdev_error.a 00:03:25.372 SO libspdk_bdev_error.so.6.0 00:03:25.372 SO libspdk_bdev_null.so.6.0 00:03:25.372 SYMLINK libspdk_bdev_ftl.so 00:03:25.372 SYMLINK libspdk_bdev_split.so 00:03:25.372 LIB libspdk_bdev_aio.a 00:03:25.372 LIB libspdk_bdev_iscsi.a 00:03:25.372 LIB libspdk_bdev_passthru.a 00:03:25.372 SO libspdk_bdev_aio.so.6.0 00:03:25.372 LIB libspdk_bdev_zone_block.a 00:03:25.372 SYMLINK libspdk_bdev_error.so 00:03:25.372 SYMLINK libspdk_bdev_null.so 00:03:25.372 LIB libspdk_bdev_malloc.a 00:03:25.372 SO libspdk_bdev_iscsi.so.6.0 00:03:25.372 SO libspdk_bdev_passthru.so.6.0 00:03:25.372 SO libspdk_bdev_zone_block.so.6.0 00:03:25.372 LIB libspdk_bdev_delay.a 00:03:25.372 SO libspdk_bdev_malloc.so.6.0 00:03:25.372 SO libspdk_bdev_delay.so.6.0 00:03:25.372 SYMLINK libspdk_bdev_aio.so 00:03:25.631 SYMLINK libspdk_bdev_passthru.so 00:03:25.631 SYMLINK libspdk_bdev_iscsi.so 00:03:25.631 SYMLINK libspdk_bdev_zone_block.so 00:03:25.631 SYMLINK libspdk_bdev_malloc.so 00:03:25.631 SYMLINK libspdk_bdev_delay.so 00:03:25.631 LIB libspdk_bdev_lvol.a 00:03:25.631 LIB libspdk_bdev_virtio.a 00:03:25.631 SO libspdk_bdev_lvol.so.6.0 00:03:25.631 SO libspdk_bdev_virtio.so.6.0 00:03:25.631 SYMLINK libspdk_bdev_lvol.so 00:03:25.631 SYMLINK libspdk_bdev_virtio.so 00:03:26.200 LIB libspdk_bdev_raid.a 00:03:26.200 SO libspdk_bdev_raid.so.6.0 00:03:26.200 SYMLINK libspdk_bdev_raid.so 00:03:27.583 LIB libspdk_bdev_nvme.a 00:03:27.583 SO libspdk_bdev_nvme.so.7.1 00:03:27.583 SYMLINK libspdk_bdev_nvme.so 00:03:27.841 CC module/event/subsystems/sock/sock.o 00:03:27.841 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:27.841 CC module/event/subsystems/vmd/vmd.o 00:03:27.841 CC module/event/subsystems/keyring/keyring.o 00:03:27.841 CC module/event/subsystems/iobuf/iobuf.o 00:03:27.841 CC module/event/subsystems/fsdev/fsdev.o 00:03:27.841 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:27.841 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:27.841 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:27.841 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.099 LIB libspdk_event_keyring.a 00:03:28.099 LIB libspdk_event_vhost_blk.a 00:03:28.099 LIB libspdk_event_fsdev.a 00:03:28.099 LIB libspdk_event_vfu_tgt.a 00:03:28.099 LIB libspdk_event_vmd.a 00:03:28.099 LIB libspdk_event_scheduler.a 00:03:28.099 LIB libspdk_event_sock.a 00:03:28.099 SO libspdk_event_keyring.so.1.0 00:03:28.099 SO libspdk_event_vhost_blk.so.3.0 00:03:28.099 LIB libspdk_event_iobuf.a 00:03:28.099 SO libspdk_event_fsdev.so.1.0 00:03:28.099 SO libspdk_event_vfu_tgt.so.3.0 00:03:28.099 SO libspdk_event_vmd.so.6.0 00:03:28.099 SO libspdk_event_scheduler.so.4.0 00:03:28.099 SO libspdk_event_sock.so.5.0 00:03:28.099 SO libspdk_event_iobuf.so.3.0 00:03:28.099 SYMLINK libspdk_event_keyring.so 00:03:28.099 
SYMLINK libspdk_event_vhost_blk.so 00:03:28.099 SYMLINK libspdk_event_fsdev.so 00:03:28.099 SYMLINK libspdk_event_vfu_tgt.so 00:03:28.099 SYMLINK libspdk_event_scheduler.so 00:03:28.099 SYMLINK libspdk_event_sock.so 00:03:28.099 SYMLINK libspdk_event_vmd.so 00:03:28.099 SYMLINK libspdk_event_iobuf.so 00:03:28.356 CC module/event/subsystems/accel/accel.o 00:03:28.357 LIB libspdk_event_accel.a 00:03:28.613 SO libspdk_event_accel.so.6.0 00:03:28.613 SYMLINK libspdk_event_accel.so 00:03:28.613 CC module/event/subsystems/bdev/bdev.o 00:03:28.871 LIB libspdk_event_bdev.a 00:03:28.871 SO libspdk_event_bdev.so.6.0 00:03:28.871 SYMLINK libspdk_event_bdev.so 00:03:29.128 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.128 CC module/event/subsystems/scsi/scsi.o 00:03:29.128 CC module/event/subsystems/nbd/nbd.o 00:03:29.128 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.128 CC module/event/subsystems/ublk/ublk.o 00:03:29.386 LIB libspdk_event_nbd.a 00:03:29.386 LIB libspdk_event_ublk.a 00:03:29.386 LIB libspdk_event_scsi.a 00:03:29.386 SO libspdk_event_ublk.so.3.0 00:03:29.386 SO libspdk_event_nbd.so.6.0 00:03:29.386 SO libspdk_event_scsi.so.6.0 00:03:29.386 SYMLINK libspdk_event_ublk.so 00:03:29.386 SYMLINK libspdk_event_nbd.so 00:03:29.386 SYMLINK libspdk_event_scsi.so 00:03:29.386 LIB libspdk_event_nvmf.a 00:03:29.386 SO libspdk_event_nvmf.so.6.0 00:03:29.386 SYMLINK libspdk_event_nvmf.so 00:03:29.644 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:29.644 CC module/event/subsystems/iscsi/iscsi.o 00:03:29.644 LIB libspdk_event_vhost_scsi.a 00:03:29.644 SO libspdk_event_vhost_scsi.so.3.0 00:03:29.644 LIB libspdk_event_iscsi.a 00:03:29.644 SO libspdk_event_iscsi.so.6.0 00:03:29.644 SYMLINK libspdk_event_vhost_scsi.so 00:03:29.901 SYMLINK libspdk_event_iscsi.so 00:03:29.901 SO libspdk.so.6.0 00:03:29.901 SYMLINK libspdk.so 00:03:30.166 CXX app/trace/trace.o 00:03:30.166 CC app/trace_record/trace_record.o 00:03:30.166 CC app/spdk_nvme_discover/discovery_aer.o 00:03:30.166 CC app/spdk_top/spdk_top.o 00:03:30.166 CC app/spdk_nvme_perf/perf.o 00:03:30.166 CC app/spdk_nvme_identify/identify.o 00:03:30.166 TEST_HEADER include/spdk/accel.h 00:03:30.166 TEST_HEADER include/spdk/accel_module.h 00:03:30.166 TEST_HEADER include/spdk/assert.h 00:03:30.166 CC test/rpc_client/rpc_client_test.o 00:03:30.166 TEST_HEADER include/spdk/barrier.h 00:03:30.166 TEST_HEADER include/spdk/base64.h 00:03:30.166 TEST_HEADER include/spdk/bdev_module.h 00:03:30.166 TEST_HEADER include/spdk/bdev.h 00:03:30.166 TEST_HEADER include/spdk/bdev_zone.h 00:03:30.166 TEST_HEADER include/spdk/bit_array.h 00:03:30.166 TEST_HEADER include/spdk/blob_bdev.h 00:03:30.166 TEST_HEADER include/spdk/bit_pool.h 00:03:30.166 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:30.166 CC app/spdk_lspci/spdk_lspci.o 00:03:30.166 TEST_HEADER include/spdk/blobfs.h 00:03:30.166 TEST_HEADER include/spdk/conf.h 00:03:30.166 TEST_HEADER include/spdk/blob.h 00:03:30.166 TEST_HEADER include/spdk/config.h 00:03:30.166 TEST_HEADER include/spdk/cpuset.h 00:03:30.166 TEST_HEADER include/spdk/crc16.h 00:03:30.166 TEST_HEADER include/spdk/crc32.h 00:03:30.166 TEST_HEADER include/spdk/crc64.h 00:03:30.166 TEST_HEADER include/spdk/dif.h 00:03:30.166 TEST_HEADER include/spdk/dma.h 00:03:30.166 TEST_HEADER include/spdk/endian.h 00:03:30.166 TEST_HEADER include/spdk/env_dpdk.h 00:03:30.166 TEST_HEADER include/spdk/env.h 00:03:30.166 TEST_HEADER include/spdk/event.h 00:03:30.166 TEST_HEADER include/spdk/fd_group.h 00:03:30.166 TEST_HEADER include/spdk/fd.h 
00:03:30.166 TEST_HEADER include/spdk/file.h 00:03:30.166 TEST_HEADER include/spdk/fsdev.h 00:03:30.166 TEST_HEADER include/spdk/fsdev_module.h 00:03:30.166 TEST_HEADER include/spdk/ftl.h 00:03:30.166 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:30.166 TEST_HEADER include/spdk/gpt_spec.h 00:03:30.166 TEST_HEADER include/spdk/hexlify.h 00:03:30.166 TEST_HEADER include/spdk/histogram_data.h 00:03:30.166 TEST_HEADER include/spdk/idxd.h 00:03:30.166 TEST_HEADER include/spdk/idxd_spec.h 00:03:30.166 TEST_HEADER include/spdk/init.h 00:03:30.166 TEST_HEADER include/spdk/ioat.h 00:03:30.166 TEST_HEADER include/spdk/ioat_spec.h 00:03:30.166 TEST_HEADER include/spdk/iscsi_spec.h 00:03:30.166 TEST_HEADER include/spdk/json.h 00:03:30.166 TEST_HEADER include/spdk/jsonrpc.h 00:03:30.166 TEST_HEADER include/spdk/keyring.h 00:03:30.166 TEST_HEADER include/spdk/keyring_module.h 00:03:30.166 TEST_HEADER include/spdk/likely.h 00:03:30.166 TEST_HEADER include/spdk/log.h 00:03:30.166 TEST_HEADER include/spdk/lvol.h 00:03:30.166 TEST_HEADER include/spdk/md5.h 00:03:30.166 TEST_HEADER include/spdk/memory.h 00:03:30.166 TEST_HEADER include/spdk/nbd.h 00:03:30.166 TEST_HEADER include/spdk/mmio.h 00:03:30.166 TEST_HEADER include/spdk/net.h 00:03:30.166 TEST_HEADER include/spdk/notify.h 00:03:30.166 TEST_HEADER include/spdk/nvme.h 00:03:30.166 TEST_HEADER include/spdk/nvme_intel.h 00:03:30.166 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:30.166 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:30.166 TEST_HEADER include/spdk/nvme_spec.h 00:03:30.166 TEST_HEADER include/spdk/nvme_zns.h 00:03:30.166 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:30.166 TEST_HEADER include/spdk/nvmf.h 00:03:30.166 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:30.166 TEST_HEADER include/spdk/nvmf_spec.h 00:03:30.166 TEST_HEADER include/spdk/nvmf_transport.h 00:03:30.166 TEST_HEADER include/spdk/opal_spec.h 00:03:30.166 TEST_HEADER include/spdk/opal.h 00:03:30.166 TEST_HEADER include/spdk/pci_ids.h 00:03:30.166 TEST_HEADER include/spdk/pipe.h 00:03:30.166 TEST_HEADER include/spdk/queue.h 00:03:30.166 TEST_HEADER include/spdk/reduce.h 00:03:30.166 TEST_HEADER include/spdk/rpc.h 00:03:30.166 TEST_HEADER include/spdk/scheduler.h 00:03:30.166 TEST_HEADER include/spdk/scsi_spec.h 00:03:30.166 TEST_HEADER include/spdk/scsi.h 00:03:30.166 TEST_HEADER include/spdk/sock.h 00:03:30.166 TEST_HEADER include/spdk/stdinc.h 00:03:30.166 TEST_HEADER include/spdk/string.h 00:03:30.166 TEST_HEADER include/spdk/thread.h 00:03:30.166 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:30.166 TEST_HEADER include/spdk/trace.h 00:03:30.166 TEST_HEADER include/spdk/trace_parser.h 00:03:30.166 TEST_HEADER include/spdk/tree.h 00:03:30.166 TEST_HEADER include/spdk/ublk.h 00:03:30.166 TEST_HEADER include/spdk/util.h 00:03:30.166 TEST_HEADER include/spdk/version.h 00:03:30.166 TEST_HEADER include/spdk/uuid.h 00:03:30.166 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:30.166 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:30.166 TEST_HEADER include/spdk/vhost.h 00:03:30.166 TEST_HEADER include/spdk/xor.h 00:03:30.166 TEST_HEADER include/spdk/vmd.h 00:03:30.166 TEST_HEADER include/spdk/zipf.h 00:03:30.166 CXX test/cpp_headers/accel.o 00:03:30.166 CXX test/cpp_headers/accel_module.o 00:03:30.166 CXX test/cpp_headers/assert.o 00:03:30.166 CXX test/cpp_headers/barrier.o 00:03:30.166 CXX test/cpp_headers/base64.o 00:03:30.166 CXX test/cpp_headers/bdev.o 00:03:30.166 CC app/spdk_dd/spdk_dd.o 00:03:30.166 CXX test/cpp_headers/bdev_module.o 00:03:30.166 CXX 
test/cpp_headers/bdev_zone.o 00:03:30.166 CXX test/cpp_headers/bit_array.o 00:03:30.166 CXX test/cpp_headers/bit_pool.o 00:03:30.166 CXX test/cpp_headers/blob_bdev.o 00:03:30.166 CXX test/cpp_headers/blobfs_bdev.o 00:03:30.166 CXX test/cpp_headers/blobfs.o 00:03:30.166 CXX test/cpp_headers/blob.o 00:03:30.166 CXX test/cpp_headers/conf.o 00:03:30.166 CXX test/cpp_headers/config.o 00:03:30.166 CXX test/cpp_headers/cpuset.o 00:03:30.166 CXX test/cpp_headers/crc16.o 00:03:30.166 CC app/iscsi_tgt/iscsi_tgt.o 00:03:30.166 CC app/nvmf_tgt/nvmf_main.o 00:03:30.166 CXX test/cpp_headers/crc32.o 00:03:30.166 CC examples/ioat/perf/perf.o 00:03:30.166 CC test/thread/poller_perf/poller_perf.o 00:03:30.166 CC app/spdk_tgt/spdk_tgt.o 00:03:30.166 CC test/app/histogram_perf/histogram_perf.o 00:03:30.166 CC examples/util/zipf/zipf.o 00:03:30.166 CC examples/ioat/verify/verify.o 00:03:30.166 CC app/fio/nvme/fio_plugin.o 00:03:30.166 CC test/app/stub/stub.o 00:03:30.166 CC test/env/vtophys/vtophys.o 00:03:30.166 CC test/env/pci/pci_ut.o 00:03:30.166 CC test/app/jsoncat/jsoncat.o 00:03:30.166 CC test/env/memory/memory_ut.o 00:03:30.166 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:30.425 CC test/dma/test_dma/test_dma.o 00:03:30.425 CC app/fio/bdev/fio_plugin.o 00:03:30.425 CC test/app/bdev_svc/bdev_svc.o 00:03:30.425 CC test/env/mem_callbacks/mem_callbacks.o 00:03:30.425 LINK spdk_lspci 00:03:30.425 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:30.425 LINK rpc_client_test 00:03:30.425 LINK spdk_nvme_discover 00:03:30.684 LINK histogram_perf 00:03:30.684 LINK poller_perf 00:03:30.684 LINK interrupt_tgt 00:03:30.684 LINK jsoncat 00:03:30.684 CXX test/cpp_headers/crc64.o 00:03:30.684 LINK vtophys 00:03:30.684 CXX test/cpp_headers/dif.o 00:03:30.684 CXX test/cpp_headers/dma.o 00:03:30.684 CXX test/cpp_headers/endian.o 00:03:30.684 LINK zipf 00:03:30.684 LINK nvmf_tgt 00:03:30.684 CXX test/cpp_headers/env_dpdk.o 00:03:30.684 CXX test/cpp_headers/env.o 00:03:30.684 CXX test/cpp_headers/event.o 00:03:30.684 CXX test/cpp_headers/fd_group.o 00:03:30.684 CXX test/cpp_headers/fd.o 00:03:30.684 CXX test/cpp_headers/file.o 00:03:30.684 LINK env_dpdk_post_init 00:03:30.684 LINK spdk_trace_record 00:03:30.684 LINK iscsi_tgt 00:03:30.684 CXX test/cpp_headers/fsdev.o 00:03:30.684 CXX test/cpp_headers/fsdev_module.o 00:03:30.684 LINK stub 00:03:30.684 CXX test/cpp_headers/ftl.o 00:03:30.684 LINK verify 00:03:30.684 CXX test/cpp_headers/fuse_dispatcher.o 00:03:30.684 CXX test/cpp_headers/gpt_spec.o 00:03:30.684 CXX test/cpp_headers/hexlify.o 00:03:30.684 LINK spdk_tgt 00:03:30.684 LINK ioat_perf 00:03:30.684 LINK bdev_svc 00:03:30.684 CXX test/cpp_headers/histogram_data.o 00:03:30.684 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.684 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:30.684 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.948 CXX test/cpp_headers/idxd.o 00:03:30.948 CXX test/cpp_headers/idxd_spec.o 00:03:30.948 CXX test/cpp_headers/init.o 00:03:30.948 CXX test/cpp_headers/ioat.o 00:03:30.948 CXX test/cpp_headers/ioat_spec.o 00:03:30.948 LINK spdk_dd 00:03:30.948 CXX test/cpp_headers/iscsi_spec.o 00:03:30.948 CXX test/cpp_headers/json.o 00:03:30.948 CXX test/cpp_headers/jsonrpc.o 00:03:30.948 LINK spdk_trace 00:03:30.948 CXX test/cpp_headers/keyring.o 00:03:30.948 CXX test/cpp_headers/keyring_module.o 00:03:30.948 CXX test/cpp_headers/likely.o 00:03:30.948 LINK pci_ut 00:03:30.948 CXX test/cpp_headers/log.o 00:03:30.948 CXX test/cpp_headers/lvol.o 00:03:30.948 CXX test/cpp_headers/md5.o 
00:03:30.948 CXX test/cpp_headers/memory.o 00:03:30.948 CXX test/cpp_headers/mmio.o 00:03:31.210 CXX test/cpp_headers/nbd.o 00:03:31.210 CXX test/cpp_headers/net.o 00:03:31.210 CXX test/cpp_headers/notify.o 00:03:31.210 CXX test/cpp_headers/nvme.o 00:03:31.210 CXX test/cpp_headers/nvme_intel.o 00:03:31.210 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.210 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.210 CXX test/cpp_headers/nvme_spec.o 00:03:31.210 CXX test/cpp_headers/nvme_zns.o 00:03:31.210 CXX test/cpp_headers/nvmf_cmd.o 00:03:31.210 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:31.210 CC test/event/event_perf/event_perf.o 00:03:31.210 CXX test/cpp_headers/nvmf.o 00:03:31.210 CXX test/cpp_headers/nvmf_spec.o 00:03:31.210 CC test/event/reactor/reactor.o 00:03:31.210 CC test/event/reactor_perf/reactor_perf.o 00:03:31.210 CXX test/cpp_headers/nvmf_transport.o 00:03:31.210 CXX test/cpp_headers/opal.o 00:03:31.210 CC test/event/app_repeat/app_repeat.o 00:03:31.210 LINK nvme_fuzz 00:03:31.210 CXX test/cpp_headers/opal_spec.o 00:03:31.210 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.210 CC examples/sock/hello_world/hello_sock.o 00:03:31.210 LINK spdk_bdev 00:03:31.472 CXX test/cpp_headers/pci_ids.o 00:03:31.472 CC examples/idxd/perf/perf.o 00:03:31.472 CC examples/thread/thread/thread_ex.o 00:03:31.472 CC test/event/scheduler/scheduler.o 00:03:31.472 LINK spdk_nvme 00:03:31.472 LINK test_dma 00:03:31.472 CXX test/cpp_headers/pipe.o 00:03:31.472 CXX test/cpp_headers/queue.o 00:03:31.472 CXX test/cpp_headers/reduce.o 00:03:31.472 CXX test/cpp_headers/rpc.o 00:03:31.472 CXX test/cpp_headers/scheduler.o 00:03:31.472 CXX test/cpp_headers/scsi.o 00:03:31.472 CXX test/cpp_headers/scsi_spec.o 00:03:31.472 CXX test/cpp_headers/sock.o 00:03:31.472 CXX test/cpp_headers/stdinc.o 00:03:31.472 CXX test/cpp_headers/string.o 00:03:31.472 CXX test/cpp_headers/thread.o 00:03:31.472 CXX test/cpp_headers/trace.o 00:03:31.472 CXX test/cpp_headers/trace_parser.o 00:03:31.472 CC examples/vmd/led/led.o 00:03:31.472 CXX test/cpp_headers/tree.o 00:03:31.472 LINK event_perf 00:03:31.472 CXX test/cpp_headers/ublk.o 00:03:31.472 LINK reactor 00:03:31.472 CXX test/cpp_headers/util.o 00:03:31.472 CXX test/cpp_headers/uuid.o 00:03:31.472 CXX test/cpp_headers/version.o 00:03:31.733 CXX test/cpp_headers/vfio_user_pci.o 00:03:31.733 LINK reactor_perf 00:03:31.733 LINK lsvmd 00:03:31.733 CXX test/cpp_headers/vfio_user_spec.o 00:03:31.733 CXX test/cpp_headers/vhost.o 00:03:31.733 CXX test/cpp_headers/vmd.o 00:03:31.733 LINK app_repeat 00:03:31.733 CC app/vhost/vhost.o 00:03:31.733 LINK vhost_fuzz 00:03:31.733 CXX test/cpp_headers/xor.o 00:03:31.733 CXX test/cpp_headers/zipf.o 00:03:31.733 LINK mem_callbacks 00:03:31.733 LINK spdk_nvme_perf 00:03:31.733 LINK spdk_nvme_identify 00:03:31.733 LINK scheduler 00:03:31.733 LINK hello_sock 00:03:31.992 LINK spdk_top 00:03:31.992 LINK led 00:03:31.992 LINK thread 00:03:31.992 LINK vhost 00:03:31.992 CC test/nvme/overhead/overhead.o 00:03:31.992 CC test/nvme/e2edp/nvme_dp.o 00:03:31.992 CC test/nvme/fused_ordering/fused_ordering.o 00:03:31.993 CC test/nvme/reset/reset.o 00:03:31.993 CC test/nvme/startup/startup.o 00:03:31.993 CC test/nvme/aer/aer.o 00:03:31.993 CC test/nvme/sgl/sgl.o 00:03:31.993 CC test/nvme/connect_stress/connect_stress.o 00:03:31.993 CC test/nvme/compliance/nvme_compliance.o 00:03:31.993 LINK idxd_perf 00:03:31.993 CC test/nvme/reserve/reserve.o 00:03:31.993 CC test/nvme/simple_copy/simple_copy.o 00:03:31.993 CC test/nvme/boot_partition/boot_partition.o 00:03:31.993 CC 
test/nvme/err_injection/err_injection.o 00:03:31.993 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:31.993 CC test/nvme/cuse/cuse.o 00:03:31.993 CC test/nvme/fdp/fdp.o 00:03:31.993 CC test/accel/dif/dif.o 00:03:31.993 CC test/blobfs/mkfs/mkfs.o 00:03:31.993 CC test/lvol/esnap/esnap.o 00:03:32.251 CC examples/nvme/arbitration/arbitration.o 00:03:32.251 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:32.251 LINK boot_partition 00:03:32.251 CC examples/nvme/hello_world/hello_world.o 00:03:32.251 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:32.251 CC examples/nvme/hotplug/hotplug.o 00:03:32.251 CC examples/nvme/reconnect/reconnect.o 00:03:32.251 CC examples/nvme/abort/abort.o 00:03:32.251 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.251 LINK doorbell_aers 00:03:32.251 LINK err_injection 00:03:32.251 LINK simple_copy 00:03:32.251 LINK startup 00:03:32.251 LINK nvme_dp 00:03:32.251 CC examples/accel/perf/accel_perf.o 00:03:32.251 LINK mkfs 00:03:32.509 LINK connect_stress 00:03:32.509 LINK overhead 00:03:32.509 LINK reserve 00:03:32.509 LINK fused_ordering 00:03:32.509 LINK nvme_compliance 00:03:32.509 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:32.509 CC examples/blob/hello_world/hello_blob.o 00:03:32.509 LINK reset 00:03:32.509 CC examples/blob/cli/blobcli.o 00:03:32.509 LINK fdp 00:03:32.509 LINK aer 00:03:32.509 LINK sgl 00:03:32.509 LINK cmb_copy 00:03:32.509 LINK memory_ut 00:03:32.509 LINK pmr_persistence 00:03:32.767 LINK hello_world 00:03:32.767 LINK hotplug 00:03:32.767 LINK arbitration 00:03:32.767 LINK hello_fsdev 00:03:32.767 LINK reconnect 00:03:32.767 LINK hello_blob 00:03:32.767 LINK abort 00:03:33.025 LINK accel_perf 00:03:33.025 LINK dif 00:03:33.025 LINK nvme_manage 00:03:33.025 LINK blobcli 00:03:33.283 LINK iscsi_fuzz 00:03:33.283 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.283 CC examples/bdev/bdevperf/bdevperf.o 00:03:33.283 CC test/bdev/bdevio/bdevio.o 00:03:33.541 LINK hello_bdev 00:03:33.541 LINK cuse 00:03:33.799 LINK bdevio 00:03:34.057 LINK bdevperf 00:03:34.624 CC examples/nvmf/nvmf/nvmf.o 00:03:34.882 LINK nvmf 00:03:37.413 LINK esnap 00:03:37.672 00:03:37.672 real 1m10.871s 00:03:37.672 user 11m49.970s 00:03:37.672 sys 2m36.497s 00:03:37.672 12:15:10 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:37.672 12:15:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:37.672 ************************************ 00:03:37.672 END TEST make 00:03:37.672 ************************************ 00:03:37.932 12:15:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:37.932 12:15:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:37.932 12:15:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:37.932 12:15:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.932 12:15:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:37.932 12:15:10 -- pm/common@44 -- $ pid=411809 00:03:37.932 12:15:10 -- pm/common@50 -- $ kill -TERM 411809 00:03:37.932 12:15:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.932 12:15:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:37.932 12:15:10 -- pm/common@44 -- $ pid=411811 00:03:37.932 12:15:10 -- pm/common@50 -- $ kill -TERM 411811 00:03:37.932 12:15:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.932 12:15:10 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:37.932 12:15:10 -- pm/common@44 -- $ pid=411812 00:03:37.932 12:15:10 -- pm/common@50 -- $ kill -TERM 411812 00:03:37.932 12:15:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.932 12:15:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:37.932 12:15:10 -- pm/common@44 -- $ pid=411844 00:03:37.932 12:15:10 -- pm/common@50 -- $ sudo -E kill -TERM 411844 00:03:37.932 12:15:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:37.932 12:15:10 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:37.932 12:15:10 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:37.932 12:15:10 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:37.932 12:15:10 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:37.932 12:15:10 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:37.932 12:15:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.932 12:15:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.932 12:15:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.932 12:15:10 -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.932 12:15:10 -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.932 12:15:10 -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.932 12:15:10 -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.932 12:15:10 -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.932 12:15:10 -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.932 12:15:10 -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.932 12:15:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.932 12:15:10 -- scripts/common.sh@344 -- # case "$op" in 00:03:37.932 12:15:10 -- scripts/common.sh@345 -- # : 1 00:03:37.932 12:15:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.932 12:15:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.932 12:15:10 -- scripts/common.sh@365 -- # decimal 1 00:03:37.932 12:15:10 -- scripts/common.sh@353 -- # local d=1 00:03:37.932 12:15:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.932 12:15:10 -- scripts/common.sh@355 -- # echo 1 00:03:37.932 12:15:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.932 12:15:10 -- scripts/common.sh@366 -- # decimal 2 00:03:37.932 12:15:10 -- scripts/common.sh@353 -- # local d=2 00:03:37.932 12:15:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.932 12:15:10 -- scripts/common.sh@355 -- # echo 2 00:03:37.932 12:15:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.932 12:15:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.932 12:15:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.932 12:15:10 -- scripts/common.sh@368 -- # return 0 00:03:37.932 12:15:10 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.932 12:15:10 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.932 --rc genhtml_branch_coverage=1 00:03:37.932 --rc genhtml_function_coverage=1 00:03:37.932 --rc genhtml_legend=1 00:03:37.932 --rc geninfo_all_blocks=1 00:03:37.932 --rc geninfo_unexecuted_blocks=1 00:03:37.932 00:03:37.932 ' 00:03:37.932 12:15:10 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.932 --rc genhtml_branch_coverage=1 00:03:37.932 --rc genhtml_function_coverage=1 00:03:37.932 --rc genhtml_legend=1 00:03:37.932 --rc geninfo_all_blocks=1 00:03:37.932 --rc geninfo_unexecuted_blocks=1 00:03:37.932 00:03:37.932 ' 00:03:37.932 12:15:10 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.932 --rc genhtml_branch_coverage=1 00:03:37.932 --rc genhtml_function_coverage=1 00:03:37.932 --rc genhtml_legend=1 00:03:37.932 --rc geninfo_all_blocks=1 00:03:37.932 --rc geninfo_unexecuted_blocks=1 00:03:37.932 00:03:37.932 ' 00:03:37.932 12:15:10 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:37.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.932 --rc genhtml_branch_coverage=1 00:03:37.932 --rc genhtml_function_coverage=1 00:03:37.932 --rc genhtml_legend=1 00:03:37.932 --rc geninfo_all_blocks=1 00:03:37.932 --rc geninfo_unexecuted_blocks=1 00:03:37.932 00:03:37.932 ' 00:03:37.932 12:15:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:37.932 12:15:10 -- nvmf/common.sh@7 -- # uname -s 00:03:37.932 12:15:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.932 12:15:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.932 12:15:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.932 12:15:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.932 12:15:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.932 12:15:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.932 12:15:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.932 12:15:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.932 12:15:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.932 12:15:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.932 12:15:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:37.932 12:15:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:37.932 12:15:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.932 12:15:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.932 12:15:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:37.932 12:15:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.932 12:15:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:37.932 12:15:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.932 12:15:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.932 12:15:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.932 12:15:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.932 12:15:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.932 12:15:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.932 12:15:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.932 12:15:10 -- paths/export.sh@5 -- # export PATH 00:03:37.932 12:15:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.932 12:15:10 -- nvmf/common.sh@51 -- # : 0 00:03:37.932 12:15:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.932 12:15:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.933 12:15:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.933 12:15:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.933 12:15:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.933 12:15:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.933 12:15:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.933 12:15:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.933 12:15:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.933 12:15:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:37.933 12:15:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:37.933 12:15:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:37.933 12:15:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:37.933 12:15:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
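The "[: : integer expression expected" complaint above is a classic empty-variable numeric test: '[' '' -eq 1 ']' leaves nothing on the left of -eq. A minimal reproduction and a defensive rewrite (the variable name here is illustrative, not the one nvmf/common.sh actually tests):

    #!/usr/bin/env bash
    flag=""                              # unset/empty in some environments

    # Reproduces the log message when $flag is empty:
    #   [ "$flag" -eq 1 ] && echo enabled

    # Defaulting the expansion keeps the test well-formed either way:
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi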
00:03:37.933 12:15:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:37.933 12:15:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:37.933 12:15:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.933 12:15:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.933 12:15:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:37.933 12:15:10 -- spdk/autotest.sh@48 -- # udevadm_pid=471980 00:03:37.933 12:15:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:37.933 12:15:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.933 12:15:10 -- pm/common@17 -- # local monitor 00:03:37.933 12:15:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.933 12:15:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.933 12:15:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.933 12:15:10 -- pm/common@21 -- # date +%s 00:03:37.933 12:15:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.933 12:15:10 -- pm/common@21 -- # date +%s 00:03:37.933 12:15:10 -- pm/common@21 -- # date +%s 00:03:37.933 12:15:10 -- pm/common@25 -- # sleep 1 00:03:37.933 12:15:10 -- pm/common@21 -- # date +%s 00:03:37.933 12:15:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730286910 00:03:37.933 12:15:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730286910 00:03:37.933 12:15:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730286910 00:03:37.933 12:15:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730286910 00:03:37.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730286910_collect-vmstat.pm.log 00:03:37.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730286910_collect-cpu-load.pm.log 00:03:37.933 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730286910_collect-cpu-temp.pm.log 00:03:38.193 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730286910_collect-bmc-pm.bmc.pm.log 00:03:39.133 12:15:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:39.133 12:15:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:39.133 12:15:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:39.133 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:03:39.133 12:15:11 -- spdk/autotest.sh@59 -- # create_test_list 00:03:39.133 12:15:11 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:39.133 12:15:11 -- common/autotest_common.sh@10 -- # set +x 00:03:39.133 12:15:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:39.133 12:15:11 
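The core-collector setup traced above saves the kernel's core_pattern and swaps in a pipe handler so cores from crashing test binaries land in the output directory. A minimal sketch of the same mechanism, run as root (the collector path is illustrative; %P, %s and %t are PID, signal number and dump time per core(5)):

    #!/usr/bin/env bash
    # Save the current pattern so it can be restored after the run.
    old_pattern=$(cat /proc/sys/kernel/core_pattern)

    # Pipe every core dump into a collector script.
    echo '|/usr/local/bin/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern

    # ... run crash-prone tests here ...

    # Put the original handler back.
    echo "$old_pattern" > /proc/sys/kernel/core_pattern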
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:39.133 12:15:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:39.133 12:15:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:39.133 12:15:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:39.133 12:15:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:39.133 12:15:11 -- common/autotest_common.sh@1455 -- # uname 00:03:39.133 12:15:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:39.133 12:15:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:39.133 12:15:11 -- common/autotest_common.sh@1475 -- # uname 00:03:39.133 12:15:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:39.133 12:15:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:39.133 12:15:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:39.133 lcov: LCOV version 1.15 00:03:39.133 12:15:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:11.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.242 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:16.509 12:15:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.509 12:15:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.509 12:15:48 -- common/autotest_common.sh@10 -- # set +x 00:04:16.509 12:15:48 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.509 12:15:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.450 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:17.450 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:17.450 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:17.450 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:17.450 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:17.450 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:17.450 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:17.450 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:17.450 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:17.450 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:17.450 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:17.450 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:17.450 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:17.450 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:17.450 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:17.450 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:17.450 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:17.450 12:15:50 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:17.450 12:15:50 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:17.450 12:15:50 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:17.450 12:15:50 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:17.450 12:15:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:17.450 12:15:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:17.450 12:15:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:17.450 12:15:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.450 12:15:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:17.450 12:15:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:17.450 12:15:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.450 12:15:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.450 12:15:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:17.451 12:15:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:17.451 12:15:50 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:17.708 No valid GPT data, bailing 00:04:17.708 12:15:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.708 12:15:50 -- scripts/common.sh@394 -- # pt= 00:04:17.708 12:15:50 -- scripts/common.sh@395 -- # return 1 00:04:17.708 12:15:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:17.708 1+0 records in 00:04:17.708 1+0 records out 00:04:17.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212554 s, 493 MB/s 00:04:17.708 12:15:50 -- spdk/autotest.sh@105 -- # sync 00:04:17.708 12:15:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:17.708 12:15:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:17.708 12:15:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.245 12:15:52 -- spdk/autotest.sh@111 -- # uname -s 00:04:20.245 12:15:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:20.245 12:15:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:20.245 12:15:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:20.816 Hugepages 00:04:20.816 node hugesize free / total 00:04:20.816 node0 1048576kB 0 / 0 00:04:20.816 node0 2048kB 0 / 0 00:04:20.816 node1 1048576kB 0 / 0 00:04:20.816 node1 2048kB 0 / 0 00:04:20.816 00:04:20.816 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.076 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:21.076 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:21.076 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:21.076 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:04:21.076 12:15:53 -- spdk/autotest.sh@117 -- # uname -s 00:04:21.076 12:15:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:21.076 12:15:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:21.076 12:15:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.458 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.458 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.458 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:23.399 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.399 12:15:56 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:24.782 12:15:57 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:24.782 12:15:57 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:24.782 12:15:57 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:24.782 12:15:57 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:24.782 12:15:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:24.782 12:15:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:24.782 12:15:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.782 12:15:57 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.782 12:15:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:24.782 12:15:57 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:24.782 12:15:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:04:24.782 12:15:57 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.722 Waiting for block devices as requested 00:04:25.722 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:25.983 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:25.983 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:26.245 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:26.245 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:26.245 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:26.245 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:26.506 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:26.506 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:26.506 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:26.506 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:26.765 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:26.765 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:26.765 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:26.765 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:27.025 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:27.025 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:27.025 12:15:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:27.025 12:15:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:04:27.286 12:15:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:27.286 12:15:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:27.286 12:15:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:27.286 12:15:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:27.286 12:15:59 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:27.286 12:15:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:27.286 12:15:59 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:27.286 12:15:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:27.286 12:15:59 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:27.286 12:15:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:27.286 12:15:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:27.286 12:15:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:27.286 12:15:59 -- common/autotest_common.sh@1541 -- # continue 00:04:27.286 12:15:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:27.286 12:15:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.286 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:04:27.286 12:15:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:27.286 12:15:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.286 12:15:59 -- common/autotest_common.sh@10 -- # set +x 00:04:27.286 12:15:59 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.667 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:28.667 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:28.667 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:29.613 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:29.613 12:16:02 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:29.613 12:16:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.613 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.613 12:16:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:29.613 12:16:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:29.613 12:16:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.613 12:16:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:29.613 12:16:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:29.613 12:16:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:29.613 12:16:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:29.613 12:16:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:29.613 12:16:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:29.613 12:16:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:29.613 12:16:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.613 12:16:02 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.613 12:16:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:29.613 12:16:02 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:29.613 12:16:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:04:29.613 12:16:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:29.613 12:16:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:29.613 12:16:02 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:29.613 12:16:02 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:29.613 12:16:02 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:29.613 12:16:02 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:29.613 12:16:02 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:04:29.613 12:16:02 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:04:29.613 12:16:02 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=482372 00:04:29.613 12:16:02 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.613 12:16:02 -- common/autotest_common.sh@1583 -- # waitforlisten 482372 00:04:29.613 12:16:02 -- common/autotest_common.sh@833 -- # '[' -z 482372 ']' 00:04:29.613 12:16:02 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.613 12:16:02 -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.613 12:16:02 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.613 12:16:02 -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.613 12:16:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.872 [2024-10-30 12:16:02.308505] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
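waitforlisten above blocks until the freshly started spdk_tgt accepts RPCs on /var/tmp/spdk.sock. A much-simplified sketch of that polling idea (the real helper in autotest_common.sh also verifies the PID is still alive and that the RPC layer actually answers, not just that the socket node exists):

    #!/usr/bin/env bash
    sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [ -S "$sock" ] && break      # socket node has appeared
        sleep 0.1
    done
    [ -S "$sock" ] || { echo "target never listened on $sock" >&2; exit 1; }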
00:04:29.872 [2024-10-30 12:16:02.308606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482372 ] 00:04:29.872 [2024-10-30 12:16:02.374229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.872 [2024-10-30 12:16:02.430418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.131 12:16:02 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.131 12:16:02 -- common/autotest_common.sh@866 -- # return 0 00:04:30.131 12:16:02 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:30.131 12:16:02 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:30.131 12:16:02 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:33.418 nvme0n1 00:04:33.419 12:16:05 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:33.419 [2024-10-30 12:16:06.033626] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:33.419 [2024-10-30 12:16:06.033677] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:33.419 request: 00:04:33.419 { 00:04:33.419 "nvme_ctrlr_name": "nvme0", 00:04:33.419 "password": "test", 00:04:33.419 "method": "bdev_nvme_opal_revert", 00:04:33.419 "req_id": 1 00:04:33.419 } 00:04:33.419 Got JSON-RPC error response 00:04:33.419 response: 00:04:33.419 { 00:04:33.419 "code": -32603, 00:04:33.419 "message": "Internal error" 00:04:33.419 } 00:04:33.419 12:16:06 -- common/autotest_common.sh@1589 -- # true 00:04:33.419 12:16:06 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:33.419 12:16:06 -- common/autotest_common.sh@1593 -- # killprocess 482372 00:04:33.419 12:16:06 -- common/autotest_common.sh@952 -- # '[' -z 482372 ']' 00:04:33.419 12:16:06 -- common/autotest_common.sh@956 -- # kill -0 482372 00:04:33.419 12:16:06 -- common/autotest_common.sh@957 -- # uname 00:04:33.419 12:16:06 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:33.419 12:16:06 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 482372 00:04:33.419 12:16:06 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:33.419 12:16:06 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:33.419 12:16:06 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 482372' 00:04:33.419 killing process with pid 482372 00:04:33.419 12:16:06 -- common/autotest_common.sh@971 -- # kill 482372 00:04:33.419 12:16:06 -- common/autotest_common.sh@976 -- # wait 482372 00:04:35.321 12:16:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:35.321 12:16:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:35.321 12:16:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.321 12:16:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.322 12:16:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:35.322 12:16:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.322 12:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:35.322 12:16:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:35.322 12:16:07 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
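run_test wraps each suite with the START/END banners and timing seen throughout this log. A stripped-down sketch of the pattern (the real autotest_common.sh version also manages xtrace toggling and failure bookkeeping):

    #!/usr/bin/env bash
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        local start=$SECONDS rc=0
        "$@" || rc=$?                 # run the suite, capture its status
        echo "************ END TEST $name (rc=$rc, $((SECONDS - start))s) ************"
        return $rc
    }

    run_test smoke bash -c 'sleep 1'  # example invocation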
00:04:35.322 12:16:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.322 12:16:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.322 12:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:35.322 ************************************ 00:04:35.322 START TEST env 00:04:35.322 ************************************ 00:04:35.322 12:16:07 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.322 * Looking for test storage... 00:04:35.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:35.322 12:16:07 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.322 12:16:07 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.322 12:16:07 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.581 12:16:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.581 12:16:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.581 12:16:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.581 12:16:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.581 12:16:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.581 12:16:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.581 12:16:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.581 12:16:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.581 12:16:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.581 12:16:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.581 12:16:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.581 12:16:08 env -- scripts/common.sh@344 -- # case "$op" in 00:04:35.581 12:16:08 env -- scripts/common.sh@345 -- # : 1 00:04:35.581 12:16:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.581 12:16:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.581 12:16:08 env -- scripts/common.sh@365 -- # decimal 1 00:04:35.581 12:16:08 env -- scripts/common.sh@353 -- # local d=1 00:04:35.581 12:16:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.581 12:16:08 env -- scripts/common.sh@355 -- # echo 1 00:04:35.581 12:16:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.581 12:16:08 env -- scripts/common.sh@366 -- # decimal 2 00:04:35.581 12:16:08 env -- scripts/common.sh@353 -- # local d=2 00:04:35.581 12:16:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.581 12:16:08 env -- scripts/common.sh@355 -- # echo 2 00:04:35.581 12:16:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.581 12:16:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.581 12:16:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.581 12:16:08 env -- scripts/common.sh@368 -- # return 0 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.581 --rc genhtml_branch_coverage=1 00:04:35.581 --rc genhtml_function_coverage=1 00:04:35.581 --rc genhtml_legend=1 00:04:35.581 --rc geninfo_all_blocks=1 00:04:35.581 --rc geninfo_unexecuted_blocks=1 00:04:35.581 00:04:35.581 ' 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.581 --rc genhtml_branch_coverage=1 00:04:35.581 --rc genhtml_function_coverage=1 00:04:35.581 --rc genhtml_legend=1 00:04:35.581 --rc geninfo_all_blocks=1 00:04:35.581 --rc geninfo_unexecuted_blocks=1 00:04:35.581 00:04:35.581 ' 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.581 --rc genhtml_branch_coverage=1 00:04:35.581 --rc genhtml_function_coverage=1 00:04:35.581 --rc genhtml_legend=1 00:04:35.581 --rc geninfo_all_blocks=1 00:04:35.581 --rc geninfo_unexecuted_blocks=1 00:04:35.581 00:04:35.581 ' 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.581 --rc genhtml_branch_coverage=1 00:04:35.581 --rc genhtml_function_coverage=1 00:04:35.581 --rc genhtml_legend=1 00:04:35.581 --rc geninfo_all_blocks=1 00:04:35.581 --rc geninfo_unexecuted_blocks=1 00:04:35.581 00:04:35.581 ' 00:04:35.581 12:16:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.581 12:16:08 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.581 12:16:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.581 ************************************ 00:04:35.581 START TEST env_memory 00:04:35.581 ************************************ 00:04:35.581 12:16:08 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.581 00:04:35.581 00:04:35.581 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.581 http://cunit.sourceforge.net/ 00:04:35.581 00:04:35.581 00:04:35.581 Suite: memory 00:04:35.581 Test: alloc and free memory map ...[2024-10-30 12:16:08.083814] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.581 passed 00:04:35.581 Test: mem map translation ...[2024-10-30 12:16:08.103701] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.581 [2024-10-30 12:16:08.103723] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.582 [2024-10-30 12:16:08.103772] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.582 [2024-10-30 12:16:08.103784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.582 passed 00:04:35.582 Test: mem map registration ...[2024-10-30 12:16:08.144726] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:35.582 [2024-10-30 12:16:08.144746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:35.582 passed 00:04:35.582 Test: mem map adjacent registrations ...passed 00:04:35.582 00:04:35.582 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.582 suites 1 1 n/a 0 0 00:04:35.582 tests 4 4 4 0 0 00:04:35.582 asserts 152 152 152 0 n/a 00:04:35.582 00:04:35.582 Elapsed time = 0.143 seconds 00:04:35.582 00:04:35.582 real 0m0.152s 00:04:35.582 user 0m0.144s 00:04:35.582 sys 0m0.008s 00:04:35.582 12:16:08 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.582 12:16:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:35.582 ************************************ 00:04:35.582 END TEST env_memory 00:04:35.582 ************************************ 00:04:35.582 12:16:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.582 12:16:08 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.582 12:16:08 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.582 12:16:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.582 ************************************ 00:04:35.582 START TEST env_vtophys 00:04:35.582 ************************************ 00:04:35.582 12:16:08 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.582 EAL: lib.eal log level changed from notice to debug 00:04:35.582 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.582 EAL: Detected lcore 1 as core 1 on socket 0 00:04:35.582 EAL: Detected lcore 2 as core 2 on socket 0 00:04:35.582 EAL: Detected lcore 3 as core 3 on socket 0 00:04:35.582 EAL: Detected lcore 4 as core 4 on socket 0 00:04:35.582 EAL: Detected lcore 5 as core 5 on socket 0 00:04:35.582 EAL: Detected lcore 6 as core 8 on socket 0 00:04:35.582 EAL: Detected lcore 7 as core 9 on socket 0 00:04:35.582 EAL: Detected lcore 8 as core 10 on socket 0 00:04:35.582 EAL: Detected lcore 9 as core 11 on socket 0 00:04:35.582 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:35.582 EAL: Detected lcore 11 as core 13 on socket 0 00:04:35.582 EAL: Detected lcore 12 as core 0 on socket 1 00:04:35.582 EAL: Detected lcore 13 as core 1 on socket 1 00:04:35.582 EAL: Detected lcore 14 as core 2 on socket 1 00:04:35.582 EAL: Detected lcore 15 as core 3 on socket 1 00:04:35.582 EAL: Detected lcore 16 as core 4 on socket 1 00:04:35.582 EAL: Detected lcore 17 as core 5 on socket 1 00:04:35.582 EAL: Detected lcore 18 as core 8 on socket 1 00:04:35.582 EAL: Detected lcore 19 as core 9 on socket 1 00:04:35.582 EAL: Detected lcore 20 as core 10 on socket 1 00:04:35.582 EAL: Detected lcore 21 as core 11 on socket 1 00:04:35.582 EAL: Detected lcore 22 as core 12 on socket 1 00:04:35.582 EAL: Detected lcore 23 as core 13 on socket 1 00:04:35.582 EAL: Detected lcore 24 as core 0 on socket 0 00:04:35.582 EAL: Detected lcore 25 as core 1 on socket 0 00:04:35.582 EAL: Detected lcore 26 as core 2 on socket 0 00:04:35.582 EAL: Detected lcore 27 as core 3 on socket 0 00:04:35.582 EAL: Detected lcore 28 as core 4 on socket 0 00:04:35.582 EAL: Detected lcore 29 as core 5 on socket 0 00:04:35.582 EAL: Detected lcore 30 as core 8 on socket 0 00:04:35.582 EAL: Detected lcore 31 as core 9 on socket 0 00:04:35.582 EAL: Detected lcore 32 as core 10 on socket 0 00:04:35.582 EAL: Detected lcore 33 as core 11 on socket 0 00:04:35.582 EAL: Detected lcore 34 as core 12 on socket 0 00:04:35.582 EAL: Detected lcore 35 as core 13 on socket 0 00:04:35.582 EAL: Detected lcore 36 as core 0 on socket 1 00:04:35.582 EAL: Detected lcore 37 as core 1 on socket 1 00:04:35.582 EAL: Detected lcore 38 as core 2 on socket 1 00:04:35.582 EAL: Detected lcore 39 as core 3 on socket 1 00:04:35.582 EAL: Detected lcore 40 as core 4 on socket 1 00:04:35.582 EAL: Detected lcore 41 as core 5 on socket 1 00:04:35.582 EAL: Detected lcore 42 as core 8 on socket 1 00:04:35.582 EAL: Detected lcore 43 as core 9 on socket 1 00:04:35.582 EAL: Detected lcore 44 as core 10 on socket 1 00:04:35.582 EAL: Detected lcore 45 as core 11 on socket 1 00:04:35.582 EAL: Detected lcore 46 as core 12 on socket 1 00:04:35.582 EAL: Detected lcore 47 as core 13 on socket 1 00:04:35.840 EAL: Maximum logical cores by configuration: 128 00:04:35.840 EAL: Detected CPU lcores: 48 00:04:35.840 EAL: Detected NUMA nodes: 2 00:04:35.840 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:35.841 EAL: Detected shared linkage of DPDK 00:04:35.841 EAL: No shared files mode enabled, IPC will be disabled 00:04:35.841 EAL: Bus pci wants IOVA as 'DC' 00:04:35.841 EAL: Buses did not request a specific IOVA mode. 00:04:35.841 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:35.841 EAL: Selected IOVA mode 'VA' 00:04:35.841 EAL: Probing VFIO support... 00:04:35.841 EAL: IOMMU type 1 (Type 1) is supported 00:04:35.841 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:35.841 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:35.841 EAL: VFIO support initialized 00:04:35.841 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.841 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.841 EAL: Setting up physically contiguous memory... 
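"VFIO support initialized" above means EAL found a usable IOMMU and can select IOVA as VA. A rough host-side check for the same preconditions (EAL's actual probe additionally exercises the VFIO container ioctls):

    #!/usr/bin/env bash
    if [ -e /dev/vfio/vfio ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "VFIO usable: $(ls /sys/kernel/iommu_groups | wc -l) IOMMU groups"
    else
        echo "no IOMMU groups visible; vfio-pci would need no-IOMMU mode or a uio driver"
    fi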
00:04:35.841 EAL: Setting maximum number of open files to 524288 00:04:35.841 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.841 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:35.841 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.841 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:35.841 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.841 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:35.841 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.841 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.841 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:35.841 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:35.841 EAL: Hugepages will be freed exactly as allocated. 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: TSC frequency is ~2700000 KHz 00:04:35.841 EAL: Main lcore 0 is ready (tid=7f1baa49ca00;cpuset=[0]) 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 0 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.841 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.841 00:04:35.841 00:04:35.841 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.841 http://cunit.sourceforge.net/ 00:04:35.841 00:04:35.841 00:04:35.841 Suite: components_suite 00:04:35.841 Test: vtophys_malloc_test ...passed 00:04:35.841 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.841 EAL: Trying to obtain current memory policy. 
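The "Heap on socket 0 was expanded/shrunk" messages in this test are backed by the per-node 2MB hugepage pools reserved earlier. The counters behind them can be read straight from sysfs:

    #!/usr/bin/env bash
    # Per-NUMA-node 2MB hugepage accounting.
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        [ -d "$hp" ] || continue
        printf '%s: %s free / %s total\n' "${node##*/}" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done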
00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 66MB 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.841 EAL: Restoring previous memory policy: 4 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.841 EAL: request: mp_malloc_sync 00:04:35.841 EAL: No shared files mode enabled, IPC is disabled 00:04:35.841 EAL: Heap on socket 0 was shrunk by 130MB 00:04:35.841 EAL: Trying to obtain current memory policy. 00:04:35.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.099 EAL: Restoring previous memory policy: 4 00:04:36.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.099 EAL: request: mp_malloc_sync 00:04:36.099 EAL: No shared files mode enabled, IPC is disabled 00:04:36.099 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.099 EAL: request: mp_malloc_sync 00:04:36.099 EAL: No shared files mode enabled, IPC is disabled 00:04:36.099 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.099 EAL: Trying to obtain current memory policy. 
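The allocation sizes this malloc test walks through (4MB, 6MB, 10MB, 18MB, ... up to 1026MB) follow a 2^n + 2 MB progression, reproduced by:

    #!/usr/bin/env bash
    for ((n = 1; n <= 10; n++)); do
        printf '%dMB ' $(( (1 << n) + 2 ))
    done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB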
00:04:36.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.099 EAL: Restoring previous memory policy: 4 00:04:36.099 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.099 EAL: request: mp_malloc_sync 00:04:36.099 EAL: No shared files mode enabled, IPC is disabled 00:04:36.099 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.357 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.357 EAL: request: mp_malloc_sync 00:04:36.357 EAL: No shared files mode enabled, IPC is disabled 00:04:36.357 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.357 EAL: Trying to obtain current memory policy. 00:04:36.357 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.615 EAL: Restoring previous memory policy: 4 00:04:36.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.615 EAL: request: mp_malloc_sync 00:04:36.615 EAL: No shared files mode enabled, IPC is disabled 00:04:36.615 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.873 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.130 EAL: request: mp_malloc_sync 00:04:37.130 EAL: No shared files mode enabled, IPC is disabled 00:04:37.130 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.130 passed 00:04:37.130 00:04:37.130 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.130 suites 1 1 n/a 0 0 00:04:37.130 tests 2 2 2 0 0 00:04:37.130 asserts 497 497 497 0 n/a 00:04:37.130 00:04:37.130 Elapsed time = 1.308 seconds 00:04:37.130 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.130 EAL: request: mp_malloc_sync 00:04:37.130 EAL: No shared files mode enabled, IPC is disabled 00:04:37.130 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.130 EAL: No shared files mode enabled, IPC is disabled 00:04:37.130 EAL: No shared files mode enabled, IPC is disabled 00:04:37.130 EAL: No shared files mode enabled, IPC is disabled 00:04:37.130 00:04:37.130 real 0m1.429s 00:04:37.130 user 0m0.825s 00:04:37.130 sys 0m0.568s 00:04:37.130 12:16:09 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.130 12:16:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.130 ************************************ 00:04:37.130 END TEST env_vtophys 00:04:37.130 ************************************ 00:04:37.130 12:16:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.130 12:16:09 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.130 12:16:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.130 12:16:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.130 ************************************ 00:04:37.130 START TEST env_pci 00:04:37.130 ************************************ 00:04:37.130 12:16:09 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.130 00:04:37.130 00:04:37.130 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.130 http://cunit.sourceforge.net/ 00:04:37.130 00:04:37.130 00:04:37.130 Suite: pci 00:04:37.130 Test: pci_hook ...[2024-10-30 12:16:09.732688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 483279 has claimed it 00:04:37.130 EAL: Cannot find device (10000:00:01.0) 00:04:37.130 EAL: Failed to attach device on primary process 00:04:37.130 passed 00:04:37.130 00:04:37.130 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:37.130 suites 1 1 n/a 0 0
00:04:37.130 tests 1 1 1 0 0
00:04:37.130 asserts 25 25 25 0 n/a
00:04:37.130
00:04:37.130 Elapsed time = 0.020 seconds
00:04:37.130
00:04:37.130 real 0m0.033s
00:04:37.130 user 0m0.014s
00:04:37.130 sys 0m0.019s
00:04:37.130 12:16:09 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:37.130 12:16:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:37.130 ************************************
00:04:37.130 END TEST env_pci
00:04:37.130 ************************************
00:04:37.130 12:16:09 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:37.130 12:16:09 env -- env/env.sh@15 -- # uname
00:04:37.130 12:16:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:37.130 12:16:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:37.130 12:16:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:37.130 12:16:09 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:04:37.130 12:16:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:37.130 12:16:09 env -- common/autotest_common.sh@10 -- # set +x
00:04:37.130 ************************************
00:04:37.130 START TEST env_dpdk_post_init
00:04:37.130 ************************************
00:04:37.130 12:16:09 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:37.389 EAL: Detected CPU lcores: 48
00:04:37.389 EAL: Detected NUMA nodes: 2
00:04:37.389 EAL: Detected shared linkage of DPDK
00:04:37.389 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:37.389 EAL: Selected IOVA mode 'VA'
00:04:37.389 EAL: VFIO support initialized
00:04:37.389 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:37.389 EAL: Using IOMMU type 1 (Type 1)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:04:37.389 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:04:37.649 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:04:37.649 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:04:37.649 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:04:38.220 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
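The spdk_ioat and spdk_nvme probes above succeed only because the job has already reserved hugepages and bound those PCI functions to a userspace driver; with VFIO initialized, EAL attaches them through IOMMU type 1. A rough sketch of reproducing this stage by hand from an SPDK checkout (the HUGEMEM value is illustrative, not taken from this job; setup.sh is SPDK's device-binding helper and would pick vfio-pci on this host since VFIO is available):

    # bind NVMe/ioat devices and reserve hugepages, then rerun the test app standalone
    sudo HUGEMEM=2048 ./scripts/setup.sh
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000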
00:04:41.499 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:41.499 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:41.757 Starting DPDK initialization... 00:04:41.757 Starting SPDK post initialization... 00:04:41.757 SPDK NVMe probe 00:04:41.757 Attaching to 0000:88:00.0 00:04:41.757 Attached to 0000:88:00.0 00:04:41.757 Cleaning up... 00:04:41.758 00:04:41.758 real 0m4.388s 00:04:41.758 user 0m2.992s 00:04:41.758 sys 0m0.457s 00:04:41.758 12:16:14 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.758 12:16:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 ************************************ 00:04:41.758 END TEST env_dpdk_post_init 00:04:41.758 ************************************ 00:04:41.758 12:16:14 env -- env/env.sh@26 -- # uname 00:04:41.758 12:16:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.758 12:16:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.758 12:16:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.758 12:16:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.758 12:16:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 ************************************ 00:04:41.758 START TEST env_mem_callbacks 00:04:41.758 ************************************ 00:04:41.758 12:16:14 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.758 EAL: Detected CPU lcores: 48 00:04:41.758 EAL: Detected NUMA nodes: 2 00:04:41.758 EAL: Detected shared linkage of DPDK 00:04:41.758 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.758 EAL: Selected IOVA mode 'VA' 00:04:41.758 EAL: VFIO support initialized 00:04:41.758 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.758 00:04:41.758 00:04:41.758 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.758 http://cunit.sourceforge.net/ 00:04:41.758 00:04:41.758 00:04:41.758 Suite: memory 00:04:41.758 Test: test ... 
00:04:41.758 register 0x200000200000 2097152 00:04:41.758 malloc 3145728 00:04:41.758 register 0x200000400000 4194304 00:04:41.758 buf 0x200000500000 len 3145728 PASSED 00:04:41.758 malloc 64 00:04:41.758 buf 0x2000004fff40 len 64 PASSED 00:04:41.758 malloc 4194304 00:04:41.758 register 0x200000800000 6291456 00:04:41.758 buf 0x200000a00000 len 4194304 PASSED 00:04:41.758 free 0x200000500000 3145728 00:04:41.758 free 0x2000004fff40 64 00:04:41.758 unregister 0x200000400000 4194304 PASSED 00:04:41.758 free 0x200000a00000 4194304 00:04:41.758 unregister 0x200000800000 6291456 PASSED 00:04:41.758 malloc 8388608 00:04:41.758 register 0x200000400000 10485760 00:04:41.758 buf 0x200000600000 len 8388608 PASSED 00:04:41.758 free 0x200000600000 8388608 00:04:41.758 unregister 0x200000400000 10485760 PASSED 00:04:41.758 passed 00:04:41.758 00:04:41.758 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.758 suites 1 1 n/a 0 0 00:04:41.758 tests 1 1 1 0 0 00:04:41.758 asserts 15 15 15 0 n/a 00:04:41.758 00:04:41.758 Elapsed time = 0.005 seconds 00:04:41.758 00:04:41.758 real 0m0.048s 00:04:41.758 user 0m0.013s 00:04:41.758 sys 0m0.035s 00:04:41.758 12:16:14 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.758 12:16:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 ************************************ 00:04:41.758 END TEST env_mem_callbacks 00:04:41.758 ************************************ 00:04:41.758 00:04:41.758 real 0m6.439s 00:04:41.758 user 0m4.182s 00:04:41.758 sys 0m1.302s 00:04:41.758 12:16:14 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.758 12:16:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 ************************************ 00:04:41.758 END TEST env 00:04:41.758 ************************************ 00:04:41.758 12:16:14 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:41.758 12:16:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.758 12:16:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.758 12:16:14 -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 ************************************ 00:04:41.758 START TEST rpc 00:04:41.758 ************************************ 00:04:41.758 12:16:14 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:41.758 * Looking for test storage... 
00:04:41.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.758 12:16:14 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.758 12:16:14 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.758 12:16:14 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.017 12:16:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.017 12:16:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.017 12:16:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.017 12:16:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.017 12:16:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.017 12:16:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.017 12:16:14 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.017 12:16:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.017 12:16:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.017 12:16:14 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.017 12:16:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.017 12:16:14 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.017 12:16:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.017 12:16:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.017 12:16:14 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.017 --rc genhtml_branch_coverage=1 00:04:42.017 --rc genhtml_function_coverage=1 00:04:42.017 --rc genhtml_legend=1 00:04:42.017 --rc geninfo_all_blocks=1 00:04:42.017 --rc geninfo_unexecuted_blocks=1 00:04:42.017 00:04:42.017 ' 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.017 --rc genhtml_branch_coverage=1 00:04:42.017 --rc genhtml_function_coverage=1 00:04:42.017 --rc genhtml_legend=1 00:04:42.017 --rc geninfo_all_blocks=1 00:04:42.017 --rc geninfo_unexecuted_blocks=1 00:04:42.017 00:04:42.017 ' 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.017 --rc genhtml_branch_coverage=1 00:04:42.017 --rc genhtml_function_coverage=1 
00:04:42.017 --rc genhtml_legend=1 00:04:42.017 --rc geninfo_all_blocks=1 00:04:42.017 --rc geninfo_unexecuted_blocks=1 00:04:42.017 00:04:42.017 ' 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.017 --rc genhtml_branch_coverage=1 00:04:42.017 --rc genhtml_function_coverage=1 00:04:42.017 --rc genhtml_legend=1 00:04:42.017 --rc geninfo_all_blocks=1 00:04:42.017 --rc geninfo_unexecuted_blocks=1 00:04:42.017 00:04:42.017 ' 00:04:42.017 12:16:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=483949 00:04:42.017 12:16:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.017 12:16:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.017 12:16:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 483949 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@833 -- # '[' -z 483949 ']' 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.017 12:16:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.017 [2024-10-30 12:16:14.564353] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:04:42.018 [2024-10-30 12:16:14.564438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483949 ] 00:04:42.018 [2024-10-30 12:16:14.635151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.018 [2024-10-30 12:16:14.689425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.018 [2024-10-30 12:16:14.689485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 483949' to capture a snapshot of events at runtime. 00:04:42.018 [2024-10-30 12:16:14.689507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.018 [2024-10-30 12:16:14.689517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.018 [2024-10-30 12:16:14.689526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid483949 for offline analysis/debug. 
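With the target up and listening on /var/tmp/spdk.sock, the rpc_integrity test below drives the bdev RPCs through rpc_cmd. The same sequence can be replayed by hand with scripts/rpc.py, which targets /var/tmp/spdk.sock by default; a sketch, where the jq calls mirror the test's length assertions:

    ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MiB at 512 B blocks -> Malloc0 (16384 blocks)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 0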
00:04:42.018 [2024-10-30 12:16:14.690050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.305 12:16:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.305 12:16:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:42.305 12:16:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.305 12:16:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.305 12:16:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:42.305 12:16:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:42.305 12:16:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.305 12:16:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.305 12:16:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.584 ************************************ 00:04:42.584 START TEST rpc_integrity 00:04:42.584 ************************************ 00:04:42.584 12:16:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:42.584 12:16:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.584 12:16:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.584 12:16:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.584 12:16:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.584 12:16:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.584 12:16:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.584 { 00:04:42.584 "name": "Malloc0", 00:04:42.584 "aliases": [ 00:04:42.584 "c70ad047-4d88-4d36-b6f6-61c8f38b57c2" 00:04:42.584 ], 00:04:42.584 "product_name": "Malloc disk", 00:04:42.584 "block_size": 512, 00:04:42.584 "num_blocks": 16384, 00:04:42.584 "uuid": "c70ad047-4d88-4d36-b6f6-61c8f38b57c2", 00:04:42.584 "assigned_rate_limits": { 00:04:42.584 "rw_ios_per_sec": 0, 00:04:42.584 "rw_mbytes_per_sec": 0, 00:04:42.584 "r_mbytes_per_sec": 0, 00:04:42.584 "w_mbytes_per_sec": 0 00:04:42.584 }, 
00:04:42.584 "claimed": false, 00:04:42.584 "zoned": false, 00:04:42.584 "supported_io_types": { 00:04:42.584 "read": true, 00:04:42.584 "write": true, 00:04:42.584 "unmap": true, 00:04:42.584 "flush": true, 00:04:42.584 "reset": true, 00:04:42.584 "nvme_admin": false, 00:04:42.584 "nvme_io": false, 00:04:42.584 "nvme_io_md": false, 00:04:42.584 "write_zeroes": true, 00:04:42.584 "zcopy": true, 00:04:42.584 "get_zone_info": false, 00:04:42.584 "zone_management": false, 00:04:42.584 "zone_append": false, 00:04:42.584 "compare": false, 00:04:42.584 "compare_and_write": false, 00:04:42.584 "abort": true, 00:04:42.584 "seek_hole": false, 00:04:42.584 "seek_data": false, 00:04:42.584 "copy": true, 00:04:42.584 "nvme_iov_md": false 00:04:42.584 }, 00:04:42.584 "memory_domains": [ 00:04:42.584 { 00:04:42.584 "dma_device_id": "system", 00:04:42.584 "dma_device_type": 1 00:04:42.584 }, 00:04:42.584 { 00:04:42.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.584 "dma_device_type": 2 00:04:42.584 } 00:04:42.584 ], 00:04:42.584 "driver_specific": {} 00:04:42.584 } 00:04:42.584 ]' 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.584 [2024-10-30 12:16:15.071773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:42.584 [2024-10-30 12:16:15.071811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.584 [2024-10-30 12:16:15.071831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2392740 00:04:42.584 [2024-10-30 12:16:15.071843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.584 [2024-10-30 12:16:15.073129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.584 [2024-10-30 12:16:15.073152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.584 Passthru0 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.584 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.584 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.584 { 00:04:42.584 "name": "Malloc0", 00:04:42.584 "aliases": [ 00:04:42.584 "c70ad047-4d88-4d36-b6f6-61c8f38b57c2" 00:04:42.584 ], 00:04:42.584 "product_name": "Malloc disk", 00:04:42.584 "block_size": 512, 00:04:42.584 "num_blocks": 16384, 00:04:42.584 "uuid": "c70ad047-4d88-4d36-b6f6-61c8f38b57c2", 00:04:42.584 "assigned_rate_limits": { 00:04:42.584 "rw_ios_per_sec": 0, 00:04:42.584 "rw_mbytes_per_sec": 0, 00:04:42.584 "r_mbytes_per_sec": 0, 00:04:42.584 "w_mbytes_per_sec": 0 00:04:42.584 }, 00:04:42.584 "claimed": true, 00:04:42.584 "claim_type": "exclusive_write", 00:04:42.584 "zoned": false, 00:04:42.584 "supported_io_types": { 00:04:42.584 "read": true, 00:04:42.584 "write": true, 00:04:42.584 "unmap": true, 00:04:42.584 "flush": 
true, 00:04:42.584 "reset": true, 00:04:42.584 "nvme_admin": false, 00:04:42.584 "nvme_io": false, 00:04:42.584 "nvme_io_md": false, 00:04:42.584 "write_zeroes": true, 00:04:42.584 "zcopy": true, 00:04:42.584 "get_zone_info": false, 00:04:42.584 "zone_management": false, 00:04:42.584 "zone_append": false, 00:04:42.584 "compare": false, 00:04:42.584 "compare_and_write": false, 00:04:42.584 "abort": true, 00:04:42.584 "seek_hole": false, 00:04:42.584 "seek_data": false, 00:04:42.584 "copy": true, 00:04:42.584 "nvme_iov_md": false 00:04:42.584 }, 00:04:42.584 "memory_domains": [ 00:04:42.584 { 00:04:42.584 "dma_device_id": "system", 00:04:42.584 "dma_device_type": 1 00:04:42.584 }, 00:04:42.585 { 00:04:42.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.585 "dma_device_type": 2 00:04:42.585 } 00:04:42.585 ], 00:04:42.585 "driver_specific": {} 00:04:42.585 }, 00:04:42.585 { 00:04:42.585 "name": "Passthru0", 00:04:42.585 "aliases": [ 00:04:42.585 "9e5472a8-9113-58b2-8f5c-077a075a88d0" 00:04:42.585 ], 00:04:42.585 "product_name": "passthru", 00:04:42.585 "block_size": 512, 00:04:42.585 "num_blocks": 16384, 00:04:42.585 "uuid": "9e5472a8-9113-58b2-8f5c-077a075a88d0", 00:04:42.585 "assigned_rate_limits": { 00:04:42.585 "rw_ios_per_sec": 0, 00:04:42.585 "rw_mbytes_per_sec": 0, 00:04:42.585 "r_mbytes_per_sec": 0, 00:04:42.585 "w_mbytes_per_sec": 0 00:04:42.585 }, 00:04:42.585 "claimed": false, 00:04:42.585 "zoned": false, 00:04:42.585 "supported_io_types": { 00:04:42.585 "read": true, 00:04:42.585 "write": true, 00:04:42.585 "unmap": true, 00:04:42.585 "flush": true, 00:04:42.585 "reset": true, 00:04:42.585 "nvme_admin": false, 00:04:42.585 "nvme_io": false, 00:04:42.585 "nvme_io_md": false, 00:04:42.585 "write_zeroes": true, 00:04:42.585 "zcopy": true, 00:04:42.585 "get_zone_info": false, 00:04:42.585 "zone_management": false, 00:04:42.585 "zone_append": false, 00:04:42.585 "compare": false, 00:04:42.585 "compare_and_write": false, 00:04:42.585 "abort": true, 00:04:42.585 "seek_hole": false, 00:04:42.585 "seek_data": false, 00:04:42.585 "copy": true, 00:04:42.585 "nvme_iov_md": false 00:04:42.585 }, 00:04:42.585 "memory_domains": [ 00:04:42.585 { 00:04:42.585 "dma_device_id": "system", 00:04:42.585 "dma_device_type": 1 00:04:42.585 }, 00:04:42.585 { 00:04:42.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.585 "dma_device_type": 2 00:04:42.585 } 00:04:42.585 ], 00:04:42.585 "driver_specific": { 00:04:42.585 "passthru": { 00:04:42.585 "name": "Passthru0", 00:04:42.585 "base_bdev_name": "Malloc0" 00:04:42.585 } 00:04:42.585 } 00:04:42.585 } 00:04:42.585 ]' 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:42.585 12:16:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.585 00:04:42.585 real 0m0.213s 00:04:42.585 user 0m0.139s 00:04:42.585 sys 0m0.020s 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.585 12:16:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 ************************************ 00:04:42.585 END TEST rpc_integrity 00:04:42.585 ************************************ 00:04:42.585 12:16:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:42.585 12:16:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.585 12:16:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.585 12:16:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 ************************************ 00:04:42.585 START TEST rpc_plugins 00:04:42.585 ************************************ 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:42.585 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.585 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:42.585 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.585 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.585 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:42.585 { 00:04:42.585 "name": "Malloc1", 00:04:42.585 "aliases": [ 00:04:42.585 "9310b142-e3ba-4f40-8434-34dd0665686e" 00:04:42.585 ], 00:04:42.585 "product_name": "Malloc disk", 00:04:42.585 "block_size": 4096, 00:04:42.585 "num_blocks": 256, 00:04:42.585 "uuid": "9310b142-e3ba-4f40-8434-34dd0665686e", 00:04:42.585 "assigned_rate_limits": { 00:04:42.585 "rw_ios_per_sec": 0, 00:04:42.585 "rw_mbytes_per_sec": 0, 00:04:42.585 "r_mbytes_per_sec": 0, 00:04:42.585 "w_mbytes_per_sec": 0 00:04:42.585 }, 00:04:42.585 "claimed": false, 00:04:42.585 "zoned": false, 00:04:42.585 "supported_io_types": { 00:04:42.585 "read": true, 00:04:42.585 "write": true, 00:04:42.585 "unmap": true, 00:04:42.585 "flush": true, 00:04:42.585 "reset": true, 00:04:42.585 "nvme_admin": false, 00:04:42.585 "nvme_io": false, 00:04:42.585 "nvme_io_md": false, 00:04:42.585 "write_zeroes": true, 00:04:42.585 "zcopy": true, 00:04:42.585 "get_zone_info": false, 00:04:42.585 "zone_management": false, 00:04:42.585 "zone_append": false, 00:04:42.585 "compare": false, 00:04:42.585 "compare_and_write": false, 00:04:42.585 "abort": true, 00:04:42.585 "seek_hole": false, 00:04:42.585 "seek_data": false, 00:04:42.585 "copy": true, 00:04:42.585 "nvme_iov_md": false 
00:04:42.585 }, 00:04:42.585 "memory_domains": [ 00:04:42.585 { 00:04:42.585 "dma_device_id": "system", 00:04:42.585 "dma_device_type": 1 00:04:42.585 }, 00:04:42.585 { 00:04:42.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.585 "dma_device_type": 2 00:04:42.585 } 00:04:42.585 ], 00:04:42.585 "driver_specific": {} 00:04:42.585 } 00:04:42.585 ]' 00:04:42.585 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:42.863 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:42.863 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.863 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.863 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:42.863 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:42.863 12:16:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:42.863 00:04:42.863 real 0m0.116s 00:04:42.863 user 0m0.075s 00:04:42.863 sys 0m0.008s 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.863 12:16:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:42.863 ************************************ 00:04:42.863 END TEST rpc_plugins 00:04:42.863 ************************************ 00:04:42.864 12:16:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:42.864 12:16:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.864 12:16:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.864 12:16:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.864 ************************************ 00:04:42.864 START TEST rpc_trace_cmd_test 00:04:42.864 ************************************ 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:42.864 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid483949", 00:04:42.864 "tpoint_group_mask": "0x8", 00:04:42.864 "iscsi_conn": { 00:04:42.864 "mask": "0x2", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "scsi": { 00:04:42.864 "mask": "0x4", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "bdev": { 00:04:42.864 "mask": "0x8", 00:04:42.864 "tpoint_mask": "0xffffffffffffffff" 00:04:42.864 }, 00:04:42.864 "nvmf_rdma": { 00:04:42.864 "mask": "0x10", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "nvmf_tcp": { 00:04:42.864 "mask": "0x20", 00:04:42.864 
"tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "ftl": { 00:04:42.864 "mask": "0x40", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "blobfs": { 00:04:42.864 "mask": "0x80", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "dsa": { 00:04:42.864 "mask": "0x200", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "thread": { 00:04:42.864 "mask": "0x400", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "nvme_pcie": { 00:04:42.864 "mask": "0x800", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "iaa": { 00:04:42.864 "mask": "0x1000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "nvme_tcp": { 00:04:42.864 "mask": "0x2000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "bdev_nvme": { 00:04:42.864 "mask": "0x4000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "sock": { 00:04:42.864 "mask": "0x8000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "blob": { 00:04:42.864 "mask": "0x10000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "bdev_raid": { 00:04:42.864 "mask": "0x20000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 }, 00:04:42.864 "scheduler": { 00:04:42.864 "mask": "0x40000", 00:04:42.864 "tpoint_mask": "0x0" 00:04:42.864 } 00:04:42.864 }' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:42.864 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.136 12:16:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:43.136 00:04:43.136 real 0m0.186s 00:04:43.136 user 0m0.165s 00:04:43.136 sys 0m0.014s 00:04:43.136 12:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.136 12:16:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 ************************************ 00:04:43.136 END TEST rpc_trace_cmd_test 00:04:43.136 ************************************ 00:04:43.136 12:16:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.136 12:16:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.136 12:16:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.136 12:16:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.136 12:16:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.136 12:16:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 ************************************ 00:04:43.136 START TEST rpc_daemon_integrity 00:04:43.136 ************************************ 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.136 12:16:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.136 { 00:04:43.136 "name": "Malloc2", 00:04:43.136 "aliases": [ 00:04:43.136 "648e9d44-bcc1-4d84-86ea-3bb139b5cc1c" 00:04:43.136 ], 00:04:43.136 "product_name": "Malloc disk", 00:04:43.136 "block_size": 512, 00:04:43.136 "num_blocks": 16384, 00:04:43.136 "uuid": "648e9d44-bcc1-4d84-86ea-3bb139b5cc1c", 00:04:43.136 "assigned_rate_limits": { 00:04:43.136 "rw_ios_per_sec": 0, 00:04:43.136 "rw_mbytes_per_sec": 0, 00:04:43.136 "r_mbytes_per_sec": 0, 00:04:43.136 "w_mbytes_per_sec": 0 00:04:43.136 }, 00:04:43.136 "claimed": false, 00:04:43.136 "zoned": false, 00:04:43.136 "supported_io_types": { 00:04:43.136 "read": true, 00:04:43.136 "write": true, 00:04:43.136 "unmap": true, 00:04:43.136 "flush": true, 00:04:43.136 "reset": true, 00:04:43.136 "nvme_admin": false, 00:04:43.136 "nvme_io": false, 00:04:43.136 "nvme_io_md": false, 00:04:43.136 "write_zeroes": true, 00:04:43.136 "zcopy": true, 00:04:43.136 "get_zone_info": false, 00:04:43.136 "zone_management": false, 00:04:43.136 "zone_append": false, 00:04:43.136 "compare": false, 00:04:43.136 "compare_and_write": false, 00:04:43.136 "abort": true, 00:04:43.136 "seek_hole": false, 00:04:43.136 "seek_data": false, 00:04:43.136 "copy": true, 00:04:43.136 "nvme_iov_md": false 00:04:43.136 }, 00:04:43.136 "memory_domains": [ 00:04:43.136 { 00:04:43.136 "dma_device_id": "system", 00:04:43.136 "dma_device_type": 1 00:04:43.136 }, 00:04:43.136 { 00:04:43.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.136 "dma_device_type": 2 00:04:43.136 } 00:04:43.136 ], 00:04:43.136 "driver_specific": {} 00:04:43.136 } 00:04:43.136 ]' 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 [2024-10-30 12:16:15.718120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:43.136 
[2024-10-30 12:16:15.718159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.136 [2024-10-30 12:16:15.718182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2392d20 00:04:43.136 [2024-10-30 12:16:15.718195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.136 [2024-10-30 12:16:15.719390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.136 [2024-10-30 12:16:15.719416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.136 Passthru0 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.136 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.136 { 00:04:43.136 "name": "Malloc2", 00:04:43.136 "aliases": [ 00:04:43.136 "648e9d44-bcc1-4d84-86ea-3bb139b5cc1c" 00:04:43.136 ], 00:04:43.136 "product_name": "Malloc disk", 00:04:43.136 "block_size": 512, 00:04:43.136 "num_blocks": 16384, 00:04:43.136 "uuid": "648e9d44-bcc1-4d84-86ea-3bb139b5cc1c", 00:04:43.136 "assigned_rate_limits": { 00:04:43.136 "rw_ios_per_sec": 0, 00:04:43.136 "rw_mbytes_per_sec": 0, 00:04:43.136 "r_mbytes_per_sec": 0, 00:04:43.136 "w_mbytes_per_sec": 0 00:04:43.136 }, 00:04:43.136 "claimed": true, 00:04:43.136 "claim_type": "exclusive_write", 00:04:43.136 "zoned": false, 00:04:43.136 "supported_io_types": { 00:04:43.136 "read": true, 00:04:43.136 "write": true, 00:04:43.136 "unmap": true, 00:04:43.136 "flush": true, 00:04:43.136 "reset": true, 00:04:43.136 "nvme_admin": false, 00:04:43.136 "nvme_io": false, 00:04:43.136 "nvme_io_md": false, 00:04:43.136 "write_zeroes": true, 00:04:43.136 "zcopy": true, 00:04:43.136 "get_zone_info": false, 00:04:43.136 "zone_management": false, 00:04:43.136 "zone_append": false, 00:04:43.136 "compare": false, 00:04:43.136 "compare_and_write": false, 00:04:43.136 "abort": true, 00:04:43.136 "seek_hole": false, 00:04:43.136 "seek_data": false, 00:04:43.136 "copy": true, 00:04:43.136 "nvme_iov_md": false 00:04:43.136 }, 00:04:43.136 "memory_domains": [ 00:04:43.136 { 00:04:43.136 "dma_device_id": "system", 00:04:43.136 "dma_device_type": 1 00:04:43.136 }, 00:04:43.136 { 00:04:43.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.137 "dma_device_type": 2 00:04:43.137 } 00:04:43.137 ], 00:04:43.137 "driver_specific": {} 00:04:43.137 }, 00:04:43.137 { 00:04:43.137 "name": "Passthru0", 00:04:43.137 "aliases": [ 00:04:43.137 "a038d905-76f1-55ec-bc51-d1caa30a2350" 00:04:43.137 ], 00:04:43.137 "product_name": "passthru", 00:04:43.137 "block_size": 512, 00:04:43.137 "num_blocks": 16384, 00:04:43.137 "uuid": "a038d905-76f1-55ec-bc51-d1caa30a2350", 00:04:43.137 "assigned_rate_limits": { 00:04:43.137 "rw_ios_per_sec": 0, 00:04:43.137 "rw_mbytes_per_sec": 0, 00:04:43.137 "r_mbytes_per_sec": 0, 00:04:43.137 "w_mbytes_per_sec": 0 00:04:43.137 }, 00:04:43.137 "claimed": false, 00:04:43.137 "zoned": false, 00:04:43.137 "supported_io_types": { 00:04:43.137 "read": true, 00:04:43.137 "write": true, 00:04:43.137 "unmap": true, 00:04:43.137 "flush": true, 00:04:43.137 "reset": true, 
00:04:43.137 "nvme_admin": false, 00:04:43.137 "nvme_io": false, 00:04:43.137 "nvme_io_md": false, 00:04:43.137 "write_zeroes": true, 00:04:43.137 "zcopy": true, 00:04:43.137 "get_zone_info": false, 00:04:43.137 "zone_management": false, 00:04:43.137 "zone_append": false, 00:04:43.137 "compare": false, 00:04:43.137 "compare_and_write": false, 00:04:43.137 "abort": true, 00:04:43.137 "seek_hole": false, 00:04:43.137 "seek_data": false, 00:04:43.137 "copy": true, 00:04:43.137 "nvme_iov_md": false 00:04:43.137 }, 00:04:43.137 "memory_domains": [ 00:04:43.137 { 00:04:43.137 "dma_device_id": "system", 00:04:43.137 "dma_device_type": 1 00:04:43.137 }, 00:04:43.137 { 00:04:43.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.137 "dma_device_type": 2 00:04:43.137 } 00:04:43.137 ], 00:04:43.137 "driver_specific": { 00:04:43.137 "passthru": { 00:04:43.137 "name": "Passthru0", 00:04:43.137 "base_bdev_name": "Malloc2" 00:04:43.137 } 00:04:43.137 } 00:04:43.137 } 00:04:43.137 ]' 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.137 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.396 12:16:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.396 00:04:43.396 real 0m0.215s 00:04:43.396 user 0m0.139s 00:04:43.396 sys 0m0.022s 00:04:43.396 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.396 12:16:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.396 ************************************ 00:04:43.396 END TEST rpc_daemon_integrity 00:04:43.396 ************************************ 00:04:43.396 12:16:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:43.396 12:16:15 rpc -- rpc/rpc.sh@84 -- # killprocess 483949 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@952 -- # '[' -z 483949 ']' 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@956 -- # kill -0 483949 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@957 -- # uname 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 483949 
00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 483949' 00:04:43.396 killing process with pid 483949 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@971 -- # kill 483949 00:04:43.396 12:16:15 rpc -- common/autotest_common.sh@976 -- # wait 483949 00:04:43.657 00:04:43.657 real 0m1.940s 00:04:43.657 user 0m2.388s 00:04:43.657 sys 0m0.615s 00:04:43.657 12:16:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.657 12:16:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.657 ************************************ 00:04:43.657 END TEST rpc 00:04:43.657 ************************************ 00:04:43.657 12:16:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:43.657 12:16:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.657 12:16:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.657 12:16:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.917 ************************************ 00:04:43.917 START TEST skip_rpc 00:04:43.917 ************************************ 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:43.917 * Looking for test storage... 00:04:43.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.917 12:16:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.917 --rc genhtml_branch_coverage=1 00:04:43.917 --rc genhtml_function_coverage=1 00:04:43.917 --rc genhtml_legend=1 00:04:43.917 --rc geninfo_all_blocks=1 00:04:43.917 --rc geninfo_unexecuted_blocks=1 00:04:43.917 00:04:43.917 ' 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.917 --rc genhtml_branch_coverage=1 00:04:43.917 --rc genhtml_function_coverage=1 00:04:43.917 --rc genhtml_legend=1 00:04:43.917 --rc geninfo_all_blocks=1 00:04:43.917 --rc geninfo_unexecuted_blocks=1 00:04:43.917 00:04:43.917 ' 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.917 --rc genhtml_branch_coverage=1 00:04:43.917 --rc genhtml_function_coverage=1 00:04:43.917 --rc genhtml_legend=1 00:04:43.917 --rc geninfo_all_blocks=1 00:04:43.917 --rc geninfo_unexecuted_blocks=1 00:04:43.917 00:04:43.917 ' 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.917 --rc genhtml_branch_coverage=1 00:04:43.917 --rc genhtml_function_coverage=1 00:04:43.917 --rc genhtml_legend=1 00:04:43.917 --rc geninfo_all_blocks=1 00:04:43.917 --rc geninfo_unexecuted_blocks=1 00:04:43.917 00:04:43.917 ' 00:04:43.917 12:16:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.917 12:16:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.917 12:16:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.917 12:16:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.917 ************************************ 00:04:43.917 START TEST skip_rpc 00:04:43.917 ************************************ 00:04:43.917 12:16:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:43.917 
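The test_skip_rpc case traced below starts the target with --no-rpc-server, so /var/tmp/spdk.sock is never opened and any RPC has to fail; the test asserts exactly that (es=1 from the NOT wrapper) before killing the target. A standalone sketch of the same check, using the flags and the 5-second settle time from this trace:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo 'unexpected: RPC answered with no RPC server'   # this would fail the test
    else
        echo 'expected: RPC refused'
    fi
    kill $tgt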
12:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=484390 00:04:43.917 12:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:43.917 12:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.917 12:16:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:43.917 [2024-10-30 12:16:16.579621] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:04:43.917 [2024-10-30 12:16:16.579714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484390 ] 00:04:44.175 [2024-10-30 12:16:16.647345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.175 [2024-10-30 12:16:16.703654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:49.437 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 484390 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 484390 ']' 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 484390 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 484390 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 484390' 00:04:49.438 killing process with pid 484390 00:04:49.438 12:16:21 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 484390 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 484390 00:04:49.438 00:04:49.438 real 0m5.464s 00:04:49.438 user 0m5.164s 00:04:49.438 sys 0m0.308s 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.438 12:16:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.438 ************************************ 00:04:49.438 END TEST skip_rpc 00:04:49.438 ************************************ 00:04:49.438 12:16:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:49.438 12:16:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.438 12:16:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.438 12:16:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.438 ************************************ 00:04:49.438 START TEST skip_rpc_with_json 00:04:49.438 ************************************ 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=485082 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 485082 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 485082 ']' 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.438 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.438 [2024-10-30 12:16:22.092287] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
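The with-json variant starting here brings up a second target (pid 485082), creates the TCP transport over RPC after nvmf_get_transports confirms it is absent, then snapshots the live state with save_config into the CONFIG_PATH set above (test/rpc/config.json). A sketch of the same round trip; the --json flag is assumed here as the standard SPDK app option for booting from a saved configuration:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    # later: boot straight from the snapshot instead of re-issuing RPCs
    ./build/bin/spdk_tgt -m 0x1 --json test/rpc/config.json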
00:04:49.438 [2024-10-30 12:16:22.092392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485082 ] 00:04:49.698 [2024-10-30 12:16:22.157344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.698 [2024-10-30 12:16:22.212895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.958 [2024-10-30 12:16:22.475040] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:49.958 request: 00:04:49.958 { 00:04:49.958 "trtype": "tcp", 00:04:49.958 "method": "nvmf_get_transports", 00:04:49.958 "req_id": 1 00:04:49.958 } 00:04:49.958 Got JSON-RPC error response 00:04:49.958 response: 00:04:49.958 { 00:04:49.958 "code": -19, 00:04:49.958 "message": "No such device" 00:04:49.958 } 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.958 [2024-10-30 12:16:22.483161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.958 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.217 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.217 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.217 { 00:04:50.217 "subsystems": [ 00:04:50.217 { 00:04:50.217 "subsystem": "fsdev", 00:04:50.217 "config": [ 00:04:50.217 { 00:04:50.217 "method": "fsdev_set_opts", 00:04:50.217 "params": { 00:04:50.217 "fsdev_io_pool_size": 65535, 00:04:50.217 "fsdev_io_cache_size": 256 00:04:50.217 } 00:04:50.217 } 00:04:50.217 ] 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "vfio_user_target", 00:04:50.217 "config": null 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "keyring", 00:04:50.217 "config": [] 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "iobuf", 00:04:50.217 "config": [ 00:04:50.217 { 00:04:50.217 "method": "iobuf_set_options", 00:04:50.217 "params": { 00:04:50.217 "small_pool_count": 8192, 00:04:50.217 "large_pool_count": 1024, 00:04:50.217 "small_bufsize": 8192, 00:04:50.217 "large_bufsize": 135168, 00:04:50.217 "enable_numa": false 00:04:50.217 } 00:04:50.217 } 00:04:50.217 
] 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "sock", 00:04:50.217 "config": [ 00:04:50.217 { 00:04:50.217 "method": "sock_set_default_impl", 00:04:50.217 "params": { 00:04:50.217 "impl_name": "posix" 00:04:50.217 } 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "method": "sock_impl_set_options", 00:04:50.217 "params": { 00:04:50.217 "impl_name": "ssl", 00:04:50.217 "recv_buf_size": 4096, 00:04:50.217 "send_buf_size": 4096, 00:04:50.217 "enable_recv_pipe": true, 00:04:50.217 "enable_quickack": false, 00:04:50.217 "enable_placement_id": 0, 00:04:50.217 "enable_zerocopy_send_server": true, 00:04:50.217 "enable_zerocopy_send_client": false, 00:04:50.217 "zerocopy_threshold": 0, 00:04:50.217 "tls_version": 0, 00:04:50.217 "enable_ktls": false 00:04:50.217 } 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "method": "sock_impl_set_options", 00:04:50.217 "params": { 00:04:50.217 "impl_name": "posix", 00:04:50.217 "recv_buf_size": 2097152, 00:04:50.217 "send_buf_size": 2097152, 00:04:50.217 "enable_recv_pipe": true, 00:04:50.217 "enable_quickack": false, 00:04:50.217 "enable_placement_id": 0, 00:04:50.217 "enable_zerocopy_send_server": true, 00:04:50.217 "enable_zerocopy_send_client": false, 00:04:50.217 "zerocopy_threshold": 0, 00:04:50.217 "tls_version": 0, 00:04:50.217 "enable_ktls": false 00:04:50.217 } 00:04:50.217 } 00:04:50.217 ] 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "vmd", 00:04:50.217 "config": [] 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "accel", 00:04:50.217 "config": [ 00:04:50.217 { 00:04:50.217 "method": "accel_set_options", 00:04:50.217 "params": { 00:04:50.217 "small_cache_size": 128, 00:04:50.217 "large_cache_size": 16, 00:04:50.217 "task_count": 2048, 00:04:50.217 "sequence_count": 2048, 00:04:50.217 "buf_count": 2048 00:04:50.217 } 00:04:50.217 } 00:04:50.217 ] 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "subsystem": "bdev", 00:04:50.217 "config": [ 00:04:50.217 { 00:04:50.217 "method": "bdev_set_options", 00:04:50.217 "params": { 00:04:50.217 "bdev_io_pool_size": 65535, 00:04:50.217 "bdev_io_cache_size": 256, 00:04:50.217 "bdev_auto_examine": true, 00:04:50.217 "iobuf_small_cache_size": 128, 00:04:50.217 "iobuf_large_cache_size": 16 00:04:50.217 } 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "method": "bdev_raid_set_options", 00:04:50.217 "params": { 00:04:50.217 "process_window_size_kb": 1024, 00:04:50.217 "process_max_bandwidth_mb_sec": 0 00:04:50.217 } 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "method": "bdev_iscsi_set_options", 00:04:50.217 "params": { 00:04:50.217 "timeout_sec": 30 00:04:50.217 } 00:04:50.217 }, 00:04:50.217 { 00:04:50.217 "method": "bdev_nvme_set_options", 00:04:50.217 "params": { 00:04:50.217 "action_on_timeout": "none", 00:04:50.217 "timeout_us": 0, 00:04:50.217 "timeout_admin_us": 0, 00:04:50.217 "keep_alive_timeout_ms": 10000, 00:04:50.217 "arbitration_burst": 0, 00:04:50.217 "low_priority_weight": 0, 00:04:50.217 "medium_priority_weight": 0, 00:04:50.217 "high_priority_weight": 0, 00:04:50.217 "nvme_adminq_poll_period_us": 10000, 00:04:50.217 "nvme_ioq_poll_period_us": 0, 00:04:50.217 "io_queue_requests": 0, 00:04:50.217 "delay_cmd_submit": true, 00:04:50.217 "transport_retry_count": 4, 00:04:50.217 "bdev_retry_count": 3, 00:04:50.217 "transport_ack_timeout": 0, 00:04:50.218 "ctrlr_loss_timeout_sec": 0, 00:04:50.218 "reconnect_delay_sec": 0, 00:04:50.218 "fast_io_fail_timeout_sec": 0, 00:04:50.218 "disable_auto_failback": false, 00:04:50.218 "generate_uuids": false, 00:04:50.218 "transport_tos": 0, 
00:04:50.218 "nvme_error_stat": false, 00:04:50.218 "rdma_srq_size": 0, 00:04:50.218 "io_path_stat": false, 00:04:50.218 "allow_accel_sequence": false, 00:04:50.218 "rdma_max_cq_size": 0, 00:04:50.218 "rdma_cm_event_timeout_ms": 0, 00:04:50.218 "dhchap_digests": [ 00:04:50.218 "sha256", 00:04:50.218 "sha384", 00:04:50.218 "sha512" 00:04:50.218 ], 00:04:50.218 "dhchap_dhgroups": [ 00:04:50.218 "null", 00:04:50.218 "ffdhe2048", 00:04:50.218 "ffdhe3072", 00:04:50.218 "ffdhe4096", 00:04:50.218 "ffdhe6144", 00:04:50.218 "ffdhe8192" 00:04:50.218 ] 00:04:50.218 } 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "method": "bdev_nvme_set_hotplug", 00:04:50.218 "params": { 00:04:50.218 "period_us": 100000, 00:04:50.218 "enable": false 00:04:50.218 } 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "method": "bdev_wait_for_examine" 00:04:50.218 } 00:04:50.218 ] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "scsi", 00:04:50.218 "config": null 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "scheduler", 00:04:50.218 "config": [ 00:04:50.218 { 00:04:50.218 "method": "framework_set_scheduler", 00:04:50.218 "params": { 00:04:50.218 "name": "static" 00:04:50.218 } 00:04:50.218 } 00:04:50.218 ] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "vhost_scsi", 00:04:50.218 "config": [] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "vhost_blk", 00:04:50.218 "config": [] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "ublk", 00:04:50.218 "config": [] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "nbd", 00:04:50.218 "config": [] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "nvmf", 00:04:50.218 "config": [ 00:04:50.218 { 00:04:50.218 "method": "nvmf_set_config", 00:04:50.218 "params": { 00:04:50.218 "discovery_filter": "match_any", 00:04:50.218 "admin_cmd_passthru": { 00:04:50.218 "identify_ctrlr": false 00:04:50.218 }, 00:04:50.218 "dhchap_digests": [ 00:04:50.218 "sha256", 00:04:50.218 "sha384", 00:04:50.218 "sha512" 00:04:50.218 ], 00:04:50.218 "dhchap_dhgroups": [ 00:04:50.218 "null", 00:04:50.218 "ffdhe2048", 00:04:50.218 "ffdhe3072", 00:04:50.218 "ffdhe4096", 00:04:50.218 "ffdhe6144", 00:04:50.218 "ffdhe8192" 00:04:50.218 ] 00:04:50.218 } 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "method": "nvmf_set_max_subsystems", 00:04:50.218 "params": { 00:04:50.218 "max_subsystems": 1024 00:04:50.218 } 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "method": "nvmf_set_crdt", 00:04:50.218 "params": { 00:04:50.218 "crdt1": 0, 00:04:50.218 "crdt2": 0, 00:04:50.218 "crdt3": 0 00:04:50.218 } 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "method": "nvmf_create_transport", 00:04:50.218 "params": { 00:04:50.218 "trtype": "TCP", 00:04:50.218 "max_queue_depth": 128, 00:04:50.218 "max_io_qpairs_per_ctrlr": 127, 00:04:50.218 "in_capsule_data_size": 4096, 00:04:50.218 "max_io_size": 131072, 00:04:50.218 "io_unit_size": 131072, 00:04:50.218 "max_aq_depth": 128, 00:04:50.218 "num_shared_buffers": 511, 00:04:50.218 "buf_cache_size": 4294967295, 00:04:50.218 "dif_insert_or_strip": false, 00:04:50.218 "zcopy": false, 00:04:50.218 "c2h_success": true, 00:04:50.218 "sock_priority": 0, 00:04:50.218 "abort_timeout_sec": 1, 00:04:50.218 "ack_timeout": 0, 00:04:50.218 "data_wr_pool_size": 0 00:04:50.218 } 00:04:50.218 } 00:04:50.218 ] 00:04:50.218 }, 00:04:50.218 { 00:04:50.218 "subsystem": "iscsi", 00:04:50.218 "config": [ 00:04:50.218 { 00:04:50.218 "method": "iscsi_set_options", 00:04:50.218 "params": { 00:04:50.218 "node_base": "iqn.2016-06.io.spdk", 00:04:50.218 "max_sessions": 
128, 00:04:50.218 "max_connections_per_session": 2, 00:04:50.218 "max_queue_depth": 64, 00:04:50.218 "default_time2wait": 2, 00:04:50.218 "default_time2retain": 20, 00:04:50.218 "first_burst_length": 8192, 00:04:50.218 "immediate_data": true, 00:04:50.218 "allow_duplicated_isid": false, 00:04:50.218 "error_recovery_level": 0, 00:04:50.218 "nop_timeout": 60, 00:04:50.218 "nop_in_interval": 30, 00:04:50.218 "disable_chap": false, 00:04:50.218 "require_chap": false, 00:04:50.218 "mutual_chap": false, 00:04:50.218 "chap_group": 0, 00:04:50.218 "max_large_datain_per_connection": 64, 00:04:50.218 "max_r2t_per_connection": 4, 00:04:50.218 "pdu_pool_size": 36864, 00:04:50.218 "immediate_data_pool_size": 16384, 00:04:50.218 "data_out_pool_size": 2048 00:04:50.218 } 00:04:50.218 } 00:04:50.218 ] 00:04:50.218 } 00:04:50.218 ] 00:04:50.218 } 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 485082 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 485082 ']' 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 485082 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 485082 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 485082' 00:04:50.218 killing process with pid 485082 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 485082 00:04:50.218 12:16:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 485082 00:04:50.477 12:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=485222 00:04:50.477 12:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.477 12:16:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 485222 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 485222 ']' 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 485222 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 485222 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 485222' 00:04:55.763 killing process with pid 485222 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 485222 00:04:55.763 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 485222 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.023 00:04:56.023 real 0m6.524s 00:04:56.023 user 0m6.153s 00:04:56.023 sys 0m0.688s 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.023 ************************************ 00:04:56.023 END TEST skip_rpc_with_json 00:04:56.023 ************************************ 00:04:56.023 12:16:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:56.023 12:16:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.023 12:16:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.023 12:16:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.023 ************************************ 00:04:56.023 START TEST skip_rpc_with_delay 00:04:56.023 ************************************ 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.023 [2024-10-30 
12:16:28.666505] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.023 00:04:56.023 real 0m0.074s 00:04:56.023 user 0m0.048s 00:04:56.023 sys 0m0.026s 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.023 12:16:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:56.023 ************************************ 00:04:56.023 END TEST skip_rpc_with_delay 00:04:56.023 ************************************ 00:04:56.023 12:16:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:56.023 12:16:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:56.023 12:16:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:56.023 12:16:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.023 12:16:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.023 12:16:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.283 ************************************ 00:04:56.283 START TEST exit_on_failed_rpc_init 00:04:56.283 ************************************ 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=485935 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 485935 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 485935 ']' 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.283 12:16:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.283 [2024-10-30 12:16:28.783644] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
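Before the exit_on_failed_rpc_init run continues below, note that the skip_rpc_with_delay failure traced above is the expected outcome: spdk_tgt refuses --wait-for-rpc when no RPC server will be started. A minimal reproduction (binary path illustrative):

    # --no-rpc-server and --wait-for-rpc are contradictory; spdk_tgt must
    # print the error seen above and exit non-zero.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: contradictory flags were accepted" >&2
        exit 1
    fi
    # expected on stderr:
    #   Cannot use '--wait-for-rpc' if no RPC server is going to be started.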
00:04:56.283 [2024-10-30 12:16:28.783748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485935 ] 00:04:56.283 [2024-10-30 12:16:28.850751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.283 [2024-10-30 12:16:28.903976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:56.542 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.801 [2024-10-30 12:16:29.227826] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:04:56.801 [2024-10-30 12:16:29.227920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485960 ] 00:04:56.801 [2024-10-30 12:16:29.296376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.801 [2024-10-30 12:16:29.355229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.801 [2024-10-30 12:16:29.355366] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
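That "in use. Specify another." error is exactly what exit_on_failed_rpc_init is checking for. Reduced to its core, with a plain sleep standing in for the test's waitforlisten helper:

    # Two targets on the same default RPC socket: the second must fail init.
    ./build/bin/spdk_tgt -m 0x1 &        # first instance owns /var/tmp/spdk.sock
    spdk_pid=$!
    sleep 5                              # assume it is listening by now

    if ./build/bin/spdk_tgt -m 0x2; then # same socket path -> must exit non-zero
        echo "FAIL: second target started despite the socket being in use" >&2
        exit 1
    fi
    kill "$spdk_pid"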
00:04:56.801 [2024-10-30 12:16:29.355386] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.801 [2024-10-30 12:16:29.355398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 485935 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 485935 ']' 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 485935 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 485935 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.801 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.802 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 485935' 00:04:56.802 killing process with pid 485935 00:04:56.802 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 485935 00:04:56.802 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 485935 00:04:57.367 00:04:57.367 real 0m1.165s 00:04:57.367 user 0m1.291s 00:04:57.367 sys 0m0.426s 00:04:57.367 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.367 12:16:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.367 ************************************ 00:04:57.367 END TEST exit_on_failed_rpc_init 00:04:57.367 ************************************ 00:04:57.367 12:16:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.367 00:04:57.367 real 0m13.568s 00:04:57.367 user 0m12.838s 00:04:57.367 sys 0m1.625s 00:04:57.367 12:16:29 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.367 12:16:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.367 ************************************ 00:04:57.367 END TEST skip_rpc 00:04:57.367 ************************************ 00:04:57.367 12:16:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.367 12:16:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.367 12:16:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.367 12:16:29 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.367 ************************************ 00:04:57.367 START TEST rpc_client 00:04:57.367 ************************************ 00:04:57.367 12:16:29 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:57.367 * Looking for test storage... 00:04:57.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:57.367 12:16:30 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.367 12:16:30 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.367 12:16:30 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.627 12:16:30 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:57.627 OK 00:04:57.627 12:16:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.627 00:04:57.627 real 0m0.164s 00:04:57.627 user 0m0.106s 00:04:57.627 sys 0m0.067s 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.627 12:16:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.627 ************************************ 00:04:57.627 END TEST rpc_client 00:04:57.627 ************************************ 00:04:57.627 12:16:30 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:57.627 12:16:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.627 12:16:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.627 12:16:30 -- common/autotest_common.sh@10 -- # set +x 00:04:57.627 ************************************ 00:04:57.627 START TEST json_config 00:04:57.627 ************************************ 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.627 12:16:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.627 12:16:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.627 12:16:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.627 12:16:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.627 12:16:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.627 12:16:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:57.627 12:16:30 json_config -- scripts/common.sh@345 -- # : 1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.627 12:16:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.627 12:16:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@353 -- # local d=1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.627 12:16:30 json_config -- scripts/common.sh@355 -- # echo 1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.627 12:16:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@353 -- # local d=2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.627 12:16:30 json_config -- scripts/common.sh@355 -- # echo 2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.627 12:16:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.627 12:16:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.627 12:16:30 json_config -- scripts/common.sh@368 -- # return 0 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.627 --rc genhtml_branch_coverage=1 00:04:57.627 --rc genhtml_function_coverage=1 00:04:57.627 --rc genhtml_legend=1 00:04:57.627 --rc geninfo_all_blocks=1 00:04:57.627 --rc geninfo_unexecuted_blocks=1 00:04:57.627 00:04:57.627 ' 00:04:57.627 12:16:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.627 12:16:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
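The lt 1.15 2 / cmp_versions trace repeated above (here and in the rpc_client block) is just scripts/common.sh deciding which lcov option names apply. A hypothetical re-implementation of the same field-by-field compare:

    # Numeric version compare, splitting fields on ".-:" as traced above.
    version_lt() {                       # succeeds when $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal -> not less-than
    }
    version_lt 1.15 2 && echo "lcov is pre-2.x; use the old option names"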
00:04:57.628 12:16:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.628 12:16:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.628 12:16:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.628 12:16:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.628 12:16:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.628 12:16:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.628 12:16:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.628 12:16:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.628 12:16:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:57.628 12:16:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@51 -- # : 0 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:57.628 12:16:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.628 12:16:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:57.628 12:16:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:57.889 12:16:30 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.889 12:16:30 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:57.889 INFO: JSON configuration test init 00:04:57.889 12:16:30 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:57.889 12:16:30 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.889 12:16:30 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.889 12:16:30 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:57.889 12:16:30 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:57.889 12:16:30 json_config -- json_config/common.sh@10 -- # shift 00:04:57.889 12:16:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.889 12:16:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.889 12:16:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.889 12:16:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.889 12:16:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.889 12:16:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=486240 00:04:57.889 12:16:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:57.889 12:16:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:57.889 Waiting for target to run... 00:04:57.889 12:16:30 json_config -- json_config/common.sh@25 -- # waitforlisten 486240 /var/tmp/spdk_tgt.sock 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@833 -- # '[' -z 486240 ']' 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.889 12:16:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.889 [2024-10-30 12:16:30.372586] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
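The waitforlisten step that follows the launch above polls the RPC socket rather than sleeping a fixed time. Schematically (the RPC method used and the retry budget are illustrative; the traced max_retries is 100):

    # Poll until the freshly started target answers on its RPC socket.
    SOCK=/var/tmp/spdk_tgt.sock
    for (( i = 0; i < 100; i++ )); do
        ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    (( i < 100 )) || { echo "target never listened on $SOCK" >&2; exit 1; }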
00:04:57.889 [2024-10-30 12:16:30.372681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486240 ] 00:04:58.146 [2024-10-30 12:16:30.715452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.146 [2024-10-30 12:16:30.756550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.713 12:16:31 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.713 12:16:31 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:58.713 12:16:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.713 00:04:58.713 12:16:31 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:58.713 12:16:31 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:58.713 12:16:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.713 12:16:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.713 12:16:31 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:58.713 12:16:31 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:58.713 12:16:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.713 12:16:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.713 12:16:31 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.970 12:16:31 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:58.970 12:16:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.261 12:16:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.261 12:16:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:02.261 12:16:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:02.261 12:16:34 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@54 -- # sort 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:02.261 12:16:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.261 12:16:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:02.261 12:16:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.261 12:16:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:02.261 12:16:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.261 12:16:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:02.520 MallocForNvmf0 00:05:02.520 12:16:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.520 12:16:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:02.778 MallocForNvmf1 00:05:02.778 12:16:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:02.778 12:16:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.037 [2024-10-30 12:16:35.658776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.037 12:16:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.037 12:16:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.297 12:16:35 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.297 12:16:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:03.555 12:16:36 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.555 12:16:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:03.813 12:16:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:03.813 12:16:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.071 [2024-10-30 12:16:36.734136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.071 12:16:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:04.071 12:16:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.071 12:16:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.330 12:16:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:04.330 12:16:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.330 12:16:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.330 12:16:36 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:04.330 12:16:36 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.330 12:16:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:04.588 MallocBdevForConfigChangeCheck 00:05:04.588 12:16:37 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:04.588 12:16:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.588 12:16:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.588 12:16:37 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:04.588 12:16:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.846 12:16:37 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:04.846 INFO: shutting down applications... 
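Spelled out, the create_nvmf_subsystem_config sequence traced above is this fixed list of rpc.py calls (socket path as used by the test; malloc sizes are in MiB with the given block size):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1 KiB blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420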
00:05:04.846 12:16:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:04.846 12:16:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:04.846 12:16:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:04.846 12:16:37 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.746 Calling clear_iscsi_subsystem 00:05:06.746 Calling clear_nvmf_subsystem 00:05:06.746 Calling clear_nbd_subsystem 00:05:06.746 Calling clear_ublk_subsystem 00:05:06.746 Calling clear_vhost_blk_subsystem 00:05:06.746 Calling clear_vhost_scsi_subsystem 00:05:06.746 Calling clear_bdev_subsystem 00:05:06.746 12:16:39 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:06.746 12:16:39 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:06.746 12:16:39 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:06.746 12:16:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.746 12:16:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.746 12:16:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:07.005 12:16:39 json_config -- json_config/json_config.sh@352 -- # break 00:05:07.005 12:16:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:07.005 12:16:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:07.005 12:16:39 json_config -- json_config/common.sh@31 -- # local app=target 00:05:07.005 12:16:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.005 12:16:39 json_config -- json_config/common.sh@35 -- # [[ -n 486240 ]] 00:05:07.005 12:16:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 486240 00:05:07.005 12:16:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.005 12:16:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.005 12:16:39 json_config -- json_config/common.sh@41 -- # kill -0 486240 00:05:07.005 12:16:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.574 12:16:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.574 12:16:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.574 12:16:40 json_config -- json_config/common.sh@41 -- # kill -0 486240 00:05:07.574 12:16:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.574 12:16:40 json_config -- json_config/common.sh@43 -- # break 00:05:07.574 12:16:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.574 12:16:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.574 SPDK target shutdown done 00:05:07.574 12:16:40 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:07.574 INFO: relaunching applications... 
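The shutdown just completed is a SIGINT followed by a bounded liveness poll: the harness probes the PID every half second and gives the target up to 30 iterations before it would escalate. A condensed sketch of that loop as it appears in the trace, with app_pid standing in for the harness's app_pid["target"] entry:

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      # kill -0 delivers no signal; it only tests whether the PID still exists
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done
  # once the probe fails the harness clears app_pid and prints 'SPDK target shutdown done'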
00:05:07.574 12:16:40 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.574 12:16:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.574 12:16:40 json_config -- json_config/common.sh@10 -- # shift 00:05:07.574 12:16:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.574 12:16:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.574 12:16:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.574 12:16:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.574 12:16:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.574 12:16:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=487535 00:05:07.574 12:16:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.574 12:16:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.574 Waiting for target to run... 00:05:07.574 12:16:40 json_config -- json_config/common.sh@25 -- # waitforlisten 487535 /var/tmp/spdk_tgt.sock 00:05:07.574 12:16:40 json_config -- common/autotest_common.sh@833 -- # '[' -z 487535 ']' 00:05:07.574 12:16:40 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.574 12:16:40 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.574 12:16:40 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.574 12:16:40 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.574 12:16:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.574 [2024-10-30 12:16:40.134182] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:07.574 [2024-10-30 12:16:40.134287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487535 ] 00:05:08.142 [2024-10-30 12:16:40.657179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.142 [2024-10-30 12:16:40.708297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.432 [2024-10-30 12:16:43.757793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.432 [2024-10-30 12:16:43.790250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.432 12:16:43 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.432 12:16:43 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:11.432 12:16:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.432 00:05:11.432 12:16:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:11.432 12:16:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.432 INFO: Checking if target configuration is the same... 
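The "Checking if target configuration is the same" step feeds the live configuration (tgt_rpc save_config on /dev/fd/62) and the previously saved spdk_tgt_config.json through config_filter.py -method sort, then runs diff -u over the two normalized copies, as the + trace below shows. The same check done by hand, a sketch that assumes config_filter.py reads the config on stdin (consistent with how json_diff.sh drives it; the /tmp file names are illustrative):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  filter="$rootdir/test/json_config/config_filter.py -method sort"
  # normalize both sides so key ordering cannot produce a spurious diff
  $rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter > /tmp/live.json
  $filter < "$rootdir/spdk_tgt_config.json" > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'
  # a non-empty diff (exit status 1) is exactly what the later change-detection pass relies on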
00:05:11.433 12:16:43 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.433 12:16:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:11.433 12:16:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.433 + '[' 2 -ne 2 ']' 00:05:11.433 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.433 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:11.433 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.433 +++ basename /dev/fd/62 00:05:11.433 ++ mktemp /tmp/62.XXX 00:05:11.433 + tmp_file_1=/tmp/62.4ly 00:05:11.433 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.433 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.433 + tmp_file_2=/tmp/spdk_tgt_config.json.oHo 00:05:11.433 + ret=0 00:05:11.433 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.692 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.692 + diff -u /tmp/62.4ly /tmp/spdk_tgt_config.json.oHo 00:05:11.692 + echo 'INFO: JSON config files are the same' 00:05:11.692 INFO: JSON config files are the same 00:05:11.692 + rm /tmp/62.4ly /tmp/spdk_tgt_config.json.oHo 00:05:11.692 + exit 0 00:05:11.692 12:16:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.692 12:16:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:11.692 INFO: changing configuration and checking if this can be detected... 00:05:11.692 12:16:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.692 12:16:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.950 12:16:44 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.950 12:16:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:11.950 12:16:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.950 + '[' 2 -ne 2 ']' 00:05:11.950 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.950 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:11.950 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.950 +++ basename /dev/fd/62 00:05:11.950 ++ mktemp /tmp/62.XXX 00:05:11.950 + tmp_file_1=/tmp/62.vRX 00:05:11.950 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.950 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.950 + tmp_file_2=/tmp/spdk_tgt_config.json.vSZ 00:05:11.950 + ret=0 00:05:11.950 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.518 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.518 + diff -u /tmp/62.vRX /tmp/spdk_tgt_config.json.vSZ 00:05:12.518 + ret=1 00:05:12.518 + echo '=== Start of file: /tmp/62.vRX ===' 00:05:12.518 + cat /tmp/62.vRX 00:05:12.518 + echo '=== End of file: /tmp/62.vRX ===' 00:05:12.518 + echo '' 00:05:12.518 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vSZ ===' 00:05:12.518 + cat /tmp/spdk_tgt_config.json.vSZ 00:05:12.518 + echo '=== End of file: /tmp/spdk_tgt_config.json.vSZ ===' 00:05:12.518 + echo '' 00:05:12.518 + rm /tmp/62.vRX /tmp/spdk_tgt_config.json.vSZ 00:05:12.518 + exit 1 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:12.518 INFO: configuration change detected. 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 487535 ]] 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.518 12:16:45 json_config -- json_config/json_config.sh@330 -- # killprocess 487535 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@952 -- # '[' -z 487535 ']' 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@956 -- # kill -0 487535 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@957 -- # uname 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:12.518 12:16:45 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 487535 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 487535' 00:05:12.518 killing process with pid 487535 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@971 -- # kill 487535 00:05:12.518 12:16:45 json_config -- common/autotest_common.sh@976 -- # wait 487535 00:05:14.419 12:16:46 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.419 12:16:46 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:14.419 12:16:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.419 12:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 12:16:46 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:14.419 12:16:46 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:14.419 INFO: Success 00:05:14.419 00:05:14.419 real 0m16.533s 00:05:14.419 user 0m18.205s 00:05:14.419 sys 0m2.561s 00:05:14.419 12:16:46 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.419 12:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 ************************************ 00:05:14.419 END TEST json_config 00:05:14.419 ************************************ 00:05:14.419 12:16:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.419 12:16:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.419 12:16:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.419 12:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 ************************************ 00:05:14.419 START TEST json_config_extra_key 00:05:14.419 ************************************ 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.419 12:16:46 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.419 --rc genhtml_branch_coverage=1 00:05:14.419 --rc genhtml_function_coverage=1 00:05:14.419 --rc genhtml_legend=1 00:05:14.419 --rc geninfo_all_blocks=1 00:05:14.419 --rc geninfo_unexecuted_blocks=1 00:05:14.419 00:05:14.419 ' 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.419 --rc genhtml_branch_coverage=1 00:05:14.419 --rc genhtml_function_coverage=1 00:05:14.419 --rc genhtml_legend=1 00:05:14.419 --rc geninfo_all_blocks=1 00:05:14.419 --rc geninfo_unexecuted_blocks=1 00:05:14.419 00:05:14.419 ' 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.419 --rc genhtml_branch_coverage=1 00:05:14.419 --rc genhtml_function_coverage=1 00:05:14.419 --rc genhtml_legend=1 00:05:14.419 --rc geninfo_all_blocks=1 00:05:14.419 --rc geninfo_unexecuted_blocks=1 00:05:14.419 00:05:14.419 ' 00:05:14.419 12:16:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.419 --rc genhtml_branch_coverage=1 00:05:14.419 --rc genhtml_function_coverage=1 00:05:14.419 --rc genhtml_legend=1 00:05:14.419 --rc geninfo_all_blocks=1 00:05:14.419 --rc geninfo_unexecuted_blocks=1 00:05:14.419 00:05:14.419 ' 00:05:14.419 12:16:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.419 12:16:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.419 12:16:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.419 12:16:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.419 12:16:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.419 12:16:46 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.419 12:16:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:14.420 12:16:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.420 12:16:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:14.420 INFO: launching applications... 
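The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq demands integer operands on both sides, an empty string is not one, and the suite simply rides over the resulting non-zero status. A small sketch of the usual way to keep such a probe quiet, with SOME_FLAG as a placeholder name rather than the variable common.sh actually tests:

  # noisy when SOME_FLAG is unset or empty:
  [ "$SOME_FLAG" -eq 1 ] && echo enabled
  # quiet equivalent: default the empty value to 0 before comparing
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled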
00:05:14.420 12:16:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=488450 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.420 Waiting for target to run... 00:05:14.420 12:16:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 488450 /var/tmp/spdk_tgt.sock 00:05:14.420 12:16:46 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 488450 ']' 00:05:14.420 12:16:46 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.420 12:16:46 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.420 12:16:46 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.420 12:16:46 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.420 12:16:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 [2024-10-30 12:16:46.946908] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:14.420 [2024-10-30 12:16:46.947004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488450 ] 00:05:14.988 [2024-10-30 12:16:47.454308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.988 [2024-10-30 12:16:47.505430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.246 12:16:47 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.246 12:16:47 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:15.246 12:16:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.246 00:05:15.504 12:16:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.504 INFO: shutting down applications... 
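Earlier in this pass, waitforlisten parks the test until the freshly launched spdk_tgt (pid 488450) answers on /var/tmp/spdk_tgt.sock, giving up after max_retries=100 attempts. A simplified sketch of that wait, assuming one cheap RPC per attempt and an illustrative 0.5 s pause (the real helper in autotest_common.sh also verifies between probes that the PID is still alive):

  rpc_addr=/var/tmp/spdk_tgt.sock
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for (( i = 0; i < 100; i++ )); do
      # rpc_get_methods is a side-effect-free liveness probe
      "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done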
00:05:15.504 12:16:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 488450 ]] 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 488450 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 488450 00:05:15.504 12:16:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 488450 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.763 12:16:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.763 SPDK target shutdown done 00:05:15.763 12:16:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.763 Success 00:05:15.763 00:05:15.763 real 0m1.683s 00:05:15.763 user 0m1.526s 00:05:15.763 sys 0m0.616s 00:05:15.763 12:16:48 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.763 12:16:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 ************************************ 00:05:15.763 END TEST json_config_extra_key 00:05:15.763 ************************************ 00:05:16.022 12:16:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.022 12:16:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.022 12:16:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.022 12:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:16.022 ************************************ 00:05:16.022 START TEST alias_rpc 00:05:16.022 ************************************ 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.022 * Looking for test storage... 
00:05:16.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.022 12:16:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.022 --rc genhtml_branch_coverage=1 00:05:16.022 --rc genhtml_function_coverage=1 00:05:16.022 --rc genhtml_legend=1 00:05:16.022 --rc geninfo_all_blocks=1 00:05:16.022 --rc geninfo_unexecuted_blocks=1 00:05:16.022 00:05:16.022 ' 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.022 --rc genhtml_branch_coverage=1 00:05:16.022 --rc genhtml_function_coverage=1 00:05:16.022 --rc genhtml_legend=1 00:05:16.022 --rc geninfo_all_blocks=1 00:05:16.022 --rc geninfo_unexecuted_blocks=1 00:05:16.022 00:05:16.022 ' 00:05:16.022 12:16:48 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.022 --rc genhtml_branch_coverage=1 00:05:16.022 --rc genhtml_function_coverage=1 00:05:16.022 --rc genhtml_legend=1 00:05:16.022 --rc geninfo_all_blocks=1 00:05:16.022 --rc geninfo_unexecuted_blocks=1 00:05:16.022 00:05:16.022 ' 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.022 --rc genhtml_branch_coverage=1 00:05:16.022 --rc genhtml_function_coverage=1 00:05:16.022 --rc genhtml_legend=1 00:05:16.022 --rc geninfo_all_blocks=1 00:05:16.022 --rc geninfo_unexecuted_blocks=1 00:05:16.022 00:05:16.022 ' 00:05:16.022 12:16:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.022 12:16:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=488768 00:05:16.022 12:16:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.022 12:16:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 488768 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 488768 ']' 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.022 12:16:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.022 [2024-10-30 12:16:48.692689] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:16.022 [2024-10-30 12:16:48.692791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488768 ] 00:05:16.280 [2024-10-30 12:16:48.759240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.280 [2024-10-30 12:16:48.814906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.539 12:16:49 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.539 12:16:49 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:16.539 12:16:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.797 12:16:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 488768 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 488768 ']' 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 488768 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 488768 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 488768' 00:05:16.797 killing process with pid 488768 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@971 -- # kill 488768 00:05:16.797 12:16:49 alias_rpc -- common/autotest_common.sh@976 -- # wait 488768 00:05:17.362 00:05:17.362 real 0m1.318s 00:05:17.362 user 0m1.438s 00:05:17.362 sys 0m0.434s 00:05:17.362 12:16:49 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.362 12:16:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.362 ************************************ 00:05:17.362 END TEST alias_rpc 00:05:17.362 ************************************ 00:05:17.362 12:16:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:17.362 12:16:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.362 12:16:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.362 12:16:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.362 12:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:17.362 ************************************ 00:05:17.362 START TEST spdkcli_tcp 00:05:17.362 ************************************ 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.362 * Looking for test storage... 
00:05:17.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.362 12:16:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.362 --rc genhtml_branch_coverage=1 00:05:17.362 --rc genhtml_function_coverage=1 00:05:17.362 --rc genhtml_legend=1 00:05:17.362 --rc geninfo_all_blocks=1 00:05:17.362 --rc geninfo_unexecuted_blocks=1 00:05:17.362 00:05:17.362 ' 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.362 --rc genhtml_branch_coverage=1 00:05:17.362 --rc genhtml_function_coverage=1 00:05:17.362 --rc genhtml_legend=1 00:05:17.362 --rc geninfo_all_blocks=1 00:05:17.362 --rc 
geninfo_unexecuted_blocks=1 00:05:17.362 00:05:17.362 ' 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.362 --rc genhtml_branch_coverage=1 00:05:17.362 --rc genhtml_function_coverage=1 00:05:17.362 --rc genhtml_legend=1 00:05:17.362 --rc geninfo_all_blocks=1 00:05:17.362 --rc geninfo_unexecuted_blocks=1 00:05:17.362 00:05:17.362 ' 00:05:17.362 12:16:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.362 --rc genhtml_branch_coverage=1 00:05:17.362 --rc genhtml_function_coverage=1 00:05:17.362 --rc genhtml_legend=1 00:05:17.362 --rc geninfo_all_blocks=1 00:05:17.362 --rc geninfo_unexecuted_blocks=1 00:05:17.362 00:05:17.362 ' 00:05:17.362 12:16:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.362 12:16:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.362 12:16:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.362 12:16:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.362 12:16:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.362 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.362 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.362 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=488966 00:05:17.362 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.362 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 488966 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 488966 ']' 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.362 12:16:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.620 [2024-10-30 12:16:50.062777] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:17.620 [2024-10-30 12:16:50.062857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488966 ] 00:05:17.620 [2024-10-30 12:16:50.128710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.620 [2024-10-30 12:16:50.185675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.620 [2024-10-30 12:16:50.185680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.878 12:16:50 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.878 12:16:50 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:17.878 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=488980 00:05:17.878 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:17.878 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.137 [ 00:05:18.137 "bdev_malloc_delete", 00:05:18.137 "bdev_malloc_create", 00:05:18.137 "bdev_null_resize", 00:05:18.137 "bdev_null_delete", 00:05:18.137 "bdev_null_create", 00:05:18.137 "bdev_nvme_cuse_unregister", 00:05:18.137 "bdev_nvme_cuse_register", 00:05:18.137 "bdev_opal_new_user", 00:05:18.137 "bdev_opal_set_lock_state", 00:05:18.137 "bdev_opal_delete", 00:05:18.137 "bdev_opal_get_info", 00:05:18.137 "bdev_opal_create", 00:05:18.137 "bdev_nvme_opal_revert", 00:05:18.137 "bdev_nvme_opal_init", 00:05:18.137 "bdev_nvme_send_cmd", 00:05:18.137 "bdev_nvme_set_keys", 00:05:18.137 "bdev_nvme_get_path_iostat", 00:05:18.137 "bdev_nvme_get_mdns_discovery_info", 00:05:18.137 "bdev_nvme_stop_mdns_discovery", 00:05:18.137 "bdev_nvme_start_mdns_discovery", 00:05:18.137 "bdev_nvme_set_multipath_policy", 00:05:18.137 "bdev_nvme_set_preferred_path", 00:05:18.137 "bdev_nvme_get_io_paths", 00:05:18.137 "bdev_nvme_remove_error_injection", 00:05:18.137 "bdev_nvme_add_error_injection", 00:05:18.137 "bdev_nvme_get_discovery_info", 00:05:18.137 "bdev_nvme_stop_discovery", 00:05:18.137 "bdev_nvme_start_discovery", 00:05:18.137 "bdev_nvme_get_controller_health_info", 00:05:18.137 "bdev_nvme_disable_controller", 00:05:18.137 "bdev_nvme_enable_controller", 00:05:18.137 "bdev_nvme_reset_controller", 00:05:18.137 "bdev_nvme_get_transport_statistics", 00:05:18.137 "bdev_nvme_apply_firmware", 00:05:18.137 "bdev_nvme_detach_controller", 00:05:18.137 "bdev_nvme_get_controllers", 00:05:18.137 "bdev_nvme_attach_controller", 00:05:18.137 "bdev_nvme_set_hotplug", 00:05:18.137 "bdev_nvme_set_options", 00:05:18.137 "bdev_passthru_delete", 00:05:18.137 "bdev_passthru_create", 00:05:18.137 "bdev_lvol_set_parent_bdev", 00:05:18.137 "bdev_lvol_set_parent", 00:05:18.137 "bdev_lvol_check_shallow_copy", 00:05:18.137 "bdev_lvol_start_shallow_copy", 00:05:18.137 "bdev_lvol_grow_lvstore", 00:05:18.137 "bdev_lvol_get_lvols", 00:05:18.137 "bdev_lvol_get_lvstores", 00:05:18.137 "bdev_lvol_delete", 00:05:18.137 "bdev_lvol_set_read_only", 00:05:18.137 "bdev_lvol_resize", 00:05:18.137 "bdev_lvol_decouple_parent", 00:05:18.137 "bdev_lvol_inflate", 00:05:18.137 "bdev_lvol_rename", 00:05:18.137 "bdev_lvol_clone_bdev", 00:05:18.137 "bdev_lvol_clone", 00:05:18.137 "bdev_lvol_snapshot", 00:05:18.137 "bdev_lvol_create", 00:05:18.137 "bdev_lvol_delete_lvstore", 00:05:18.137 "bdev_lvol_rename_lvstore", 
00:05:18.137 "bdev_lvol_create_lvstore", 00:05:18.137 "bdev_raid_set_options", 00:05:18.137 "bdev_raid_remove_base_bdev", 00:05:18.137 "bdev_raid_add_base_bdev", 00:05:18.137 "bdev_raid_delete", 00:05:18.137 "bdev_raid_create", 00:05:18.137 "bdev_raid_get_bdevs", 00:05:18.137 "bdev_error_inject_error", 00:05:18.137 "bdev_error_delete", 00:05:18.137 "bdev_error_create", 00:05:18.137 "bdev_split_delete", 00:05:18.137 "bdev_split_create", 00:05:18.137 "bdev_delay_delete", 00:05:18.137 "bdev_delay_create", 00:05:18.137 "bdev_delay_update_latency", 00:05:18.137 "bdev_zone_block_delete", 00:05:18.137 "bdev_zone_block_create", 00:05:18.137 "blobfs_create", 00:05:18.137 "blobfs_detect", 00:05:18.137 "blobfs_set_cache_size", 00:05:18.137 "bdev_aio_delete", 00:05:18.137 "bdev_aio_rescan", 00:05:18.137 "bdev_aio_create", 00:05:18.137 "bdev_ftl_set_property", 00:05:18.137 "bdev_ftl_get_properties", 00:05:18.137 "bdev_ftl_get_stats", 00:05:18.137 "bdev_ftl_unmap", 00:05:18.137 "bdev_ftl_unload", 00:05:18.137 "bdev_ftl_delete", 00:05:18.137 "bdev_ftl_load", 00:05:18.137 "bdev_ftl_create", 00:05:18.137 "bdev_virtio_attach_controller", 00:05:18.137 "bdev_virtio_scsi_get_devices", 00:05:18.137 "bdev_virtio_detach_controller", 00:05:18.137 "bdev_virtio_blk_set_hotplug", 00:05:18.137 "bdev_iscsi_delete", 00:05:18.137 "bdev_iscsi_create", 00:05:18.137 "bdev_iscsi_set_options", 00:05:18.137 "accel_error_inject_error", 00:05:18.137 "ioat_scan_accel_module", 00:05:18.137 "dsa_scan_accel_module", 00:05:18.137 "iaa_scan_accel_module", 00:05:18.137 "vfu_virtio_create_fs_endpoint", 00:05:18.137 "vfu_virtio_create_scsi_endpoint", 00:05:18.137 "vfu_virtio_scsi_remove_target", 00:05:18.137 "vfu_virtio_scsi_add_target", 00:05:18.137 "vfu_virtio_create_blk_endpoint", 00:05:18.137 "vfu_virtio_delete_endpoint", 00:05:18.137 "keyring_file_remove_key", 00:05:18.137 "keyring_file_add_key", 00:05:18.137 "keyring_linux_set_options", 00:05:18.137 "fsdev_aio_delete", 00:05:18.137 "fsdev_aio_create", 00:05:18.137 "iscsi_get_histogram", 00:05:18.137 "iscsi_enable_histogram", 00:05:18.137 "iscsi_set_options", 00:05:18.137 "iscsi_get_auth_groups", 00:05:18.137 "iscsi_auth_group_remove_secret", 00:05:18.137 "iscsi_auth_group_add_secret", 00:05:18.137 "iscsi_delete_auth_group", 00:05:18.137 "iscsi_create_auth_group", 00:05:18.138 "iscsi_set_discovery_auth", 00:05:18.138 "iscsi_get_options", 00:05:18.138 "iscsi_target_node_request_logout", 00:05:18.138 "iscsi_target_node_set_redirect", 00:05:18.138 "iscsi_target_node_set_auth", 00:05:18.138 "iscsi_target_node_add_lun", 00:05:18.138 "iscsi_get_stats", 00:05:18.138 "iscsi_get_connections", 00:05:18.138 "iscsi_portal_group_set_auth", 00:05:18.138 "iscsi_start_portal_group", 00:05:18.138 "iscsi_delete_portal_group", 00:05:18.138 "iscsi_create_portal_group", 00:05:18.138 "iscsi_get_portal_groups", 00:05:18.138 "iscsi_delete_target_node", 00:05:18.138 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.138 "iscsi_target_node_add_pg_ig_maps", 00:05:18.138 "iscsi_create_target_node", 00:05:18.138 "iscsi_get_target_nodes", 00:05:18.138 "iscsi_delete_initiator_group", 00:05:18.138 "iscsi_initiator_group_remove_initiators", 00:05:18.138 "iscsi_initiator_group_add_initiators", 00:05:18.138 "iscsi_create_initiator_group", 00:05:18.138 "iscsi_get_initiator_groups", 00:05:18.138 "nvmf_set_crdt", 00:05:18.138 "nvmf_set_config", 00:05:18.138 "nvmf_set_max_subsystems", 00:05:18.138 "nvmf_stop_mdns_prr", 00:05:18.138 "nvmf_publish_mdns_prr", 00:05:18.138 "nvmf_subsystem_get_listeners", 00:05:18.138 
"nvmf_subsystem_get_qpairs", 00:05:18.138 "nvmf_subsystem_get_controllers", 00:05:18.138 "nvmf_get_stats", 00:05:18.138 "nvmf_get_transports", 00:05:18.138 "nvmf_create_transport", 00:05:18.138 "nvmf_get_targets", 00:05:18.138 "nvmf_delete_target", 00:05:18.138 "nvmf_create_target", 00:05:18.138 "nvmf_subsystem_allow_any_host", 00:05:18.138 "nvmf_subsystem_set_keys", 00:05:18.138 "nvmf_subsystem_remove_host", 00:05:18.138 "nvmf_subsystem_add_host", 00:05:18.138 "nvmf_ns_remove_host", 00:05:18.138 "nvmf_ns_add_host", 00:05:18.138 "nvmf_subsystem_remove_ns", 00:05:18.138 "nvmf_subsystem_set_ns_ana_group", 00:05:18.138 "nvmf_subsystem_add_ns", 00:05:18.138 "nvmf_subsystem_listener_set_ana_state", 00:05:18.138 "nvmf_discovery_get_referrals", 00:05:18.138 "nvmf_discovery_remove_referral", 00:05:18.138 "nvmf_discovery_add_referral", 00:05:18.138 "nvmf_subsystem_remove_listener", 00:05:18.138 "nvmf_subsystem_add_listener", 00:05:18.138 "nvmf_delete_subsystem", 00:05:18.138 "nvmf_create_subsystem", 00:05:18.138 "nvmf_get_subsystems", 00:05:18.138 "env_dpdk_get_mem_stats", 00:05:18.138 "nbd_get_disks", 00:05:18.138 "nbd_stop_disk", 00:05:18.138 "nbd_start_disk", 00:05:18.138 "ublk_recover_disk", 00:05:18.138 "ublk_get_disks", 00:05:18.138 "ublk_stop_disk", 00:05:18.138 "ublk_start_disk", 00:05:18.138 "ublk_destroy_target", 00:05:18.138 "ublk_create_target", 00:05:18.138 "virtio_blk_create_transport", 00:05:18.138 "virtio_blk_get_transports", 00:05:18.138 "vhost_controller_set_coalescing", 00:05:18.138 "vhost_get_controllers", 00:05:18.138 "vhost_delete_controller", 00:05:18.138 "vhost_create_blk_controller", 00:05:18.138 "vhost_scsi_controller_remove_target", 00:05:18.138 "vhost_scsi_controller_add_target", 00:05:18.138 "vhost_start_scsi_controller", 00:05:18.138 "vhost_create_scsi_controller", 00:05:18.138 "thread_set_cpumask", 00:05:18.138 "scheduler_set_options", 00:05:18.138 "framework_get_governor", 00:05:18.138 "framework_get_scheduler", 00:05:18.138 "framework_set_scheduler", 00:05:18.138 "framework_get_reactors", 00:05:18.138 "thread_get_io_channels", 00:05:18.138 "thread_get_pollers", 00:05:18.138 "thread_get_stats", 00:05:18.138 "framework_monitor_context_switch", 00:05:18.138 "spdk_kill_instance", 00:05:18.138 "log_enable_timestamps", 00:05:18.138 "log_get_flags", 00:05:18.138 "log_clear_flag", 00:05:18.138 "log_set_flag", 00:05:18.138 "log_get_level", 00:05:18.138 "log_set_level", 00:05:18.138 "log_get_print_level", 00:05:18.138 "log_set_print_level", 00:05:18.138 "framework_enable_cpumask_locks", 00:05:18.138 "framework_disable_cpumask_locks", 00:05:18.138 "framework_wait_init", 00:05:18.138 "framework_start_init", 00:05:18.138 "scsi_get_devices", 00:05:18.138 "bdev_get_histogram", 00:05:18.138 "bdev_enable_histogram", 00:05:18.138 "bdev_set_qos_limit", 00:05:18.138 "bdev_set_qd_sampling_period", 00:05:18.138 "bdev_get_bdevs", 00:05:18.138 "bdev_reset_iostat", 00:05:18.138 "bdev_get_iostat", 00:05:18.138 "bdev_examine", 00:05:18.138 "bdev_wait_for_examine", 00:05:18.138 "bdev_set_options", 00:05:18.138 "accel_get_stats", 00:05:18.138 "accel_set_options", 00:05:18.138 "accel_set_driver", 00:05:18.138 "accel_crypto_key_destroy", 00:05:18.138 "accel_crypto_keys_get", 00:05:18.138 "accel_crypto_key_create", 00:05:18.138 "accel_assign_opc", 00:05:18.138 "accel_get_module_info", 00:05:18.138 "accel_get_opc_assignments", 00:05:18.138 "vmd_rescan", 00:05:18.138 "vmd_remove_device", 00:05:18.138 "vmd_enable", 00:05:18.138 "sock_get_default_impl", 00:05:18.138 "sock_set_default_impl", 
00:05:18.138 "sock_impl_set_options", 00:05:18.138 "sock_impl_get_options", 00:05:18.138 "iobuf_get_stats", 00:05:18.138 "iobuf_set_options", 00:05:18.138 "keyring_get_keys", 00:05:18.138 "vfu_tgt_set_base_path", 00:05:18.138 "framework_get_pci_devices", 00:05:18.138 "framework_get_config", 00:05:18.138 "framework_get_subsystems", 00:05:18.138 "fsdev_set_opts", 00:05:18.138 "fsdev_get_opts", 00:05:18.138 "trace_get_info", 00:05:18.138 "trace_get_tpoint_group_mask", 00:05:18.138 "trace_disable_tpoint_group", 00:05:18.138 "trace_enable_tpoint_group", 00:05:18.138 "trace_clear_tpoint_mask", 00:05:18.138 "trace_set_tpoint_mask", 00:05:18.138 "notify_get_notifications", 00:05:18.138 "notify_get_types", 00:05:18.138 "spdk_get_version", 00:05:18.138 "rpc_get_methods" 00:05:18.138 ] 00:05:18.138 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.138 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.138 12:16:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 488966 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 488966 ']' 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 488966 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 488966 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 488966' 00:05:18.138 killing process with pid 488966 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 488966 00:05:18.138 12:16:50 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 488966 00:05:18.703 00:05:18.703 real 0m1.374s 00:05:18.703 user 0m2.464s 00:05:18.703 sys 0m0.487s 00:05:18.703 12:16:51 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.703 12:16:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.703 ************************************ 00:05:18.703 END TEST spdkcli_tcp 00:05:18.703 ************************************ 00:05:18.703 12:16:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.703 12:16:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.703 12:16:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.703 12:16:51 -- common/autotest_common.sh@10 -- # set +x 00:05:18.703 ************************************ 00:05:18.703 START TEST dpdk_mem_utility 00:05:18.703 ************************************ 00:05:18.703 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:18.703 * Looking for test storage... 
00:05:18.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:18.703 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.703 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.703 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.961 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.961 12:16:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:18.961 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.961 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.961 --rc genhtml_branch_coverage=1 00:05:18.961 --rc genhtml_function_coverage=1 00:05:18.961 --rc genhtml_legend=1 00:05:18.961 --rc geninfo_all_blocks=1 00:05:18.961 --rc geninfo_unexecuted_blocks=1 00:05:18.961 00:05:18.961 ' 00:05:18.961 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.961 --rc 
genhtml_branch_coverage=1 00:05:18.961 --rc genhtml_function_coverage=1 00:05:18.961 --rc genhtml_legend=1 00:05:18.961 --rc geninfo_all_blocks=1 00:05:18.961 --rc geninfo_unexecuted_blocks=1 00:05:18.962 00:05:18.962 ' 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.962 --rc genhtml_branch_coverage=1 00:05:18.962 --rc genhtml_function_coverage=1 00:05:18.962 --rc genhtml_legend=1 00:05:18.962 --rc geninfo_all_blocks=1 00:05:18.962 --rc geninfo_unexecuted_blocks=1 00:05:18.962 00:05:18.962 ' 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.962 --rc genhtml_branch_coverage=1 00:05:18.962 --rc genhtml_function_coverage=1 00:05:18.962 --rc genhtml_legend=1 00:05:18.962 --rc geninfo_all_blocks=1 00:05:18.962 --rc geninfo_unexecuted_blocks=1 00:05:18.962 00:05:18.962 ' 00:05:18.962 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:18.962 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=489182 00:05:18.962 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.962 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 489182 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 489182 ']' 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.962 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.962 [2024-10-30 12:16:51.480988] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:18.962 [2024-10-30 12:16:51.481064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489182 ] 00:05:18.962 [2024-10-30 12:16:51.543362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.962 [2024-10-30 12:16:51.602097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.220 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.220 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:19.220 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:19.220 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:19.220 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.220 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.220 { 00:05:19.220 "filename": "/tmp/spdk_mem_dump.txt" 00:05:19.220 } 00:05:19.220 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.220 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.477 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:19.477 1 heaps totaling size 810.000000 MiB 00:05:19.477 size: 810.000000 MiB heap id: 0 00:05:19.477 end heaps---------- 00:05:19.477 9 mempools totaling size 595.772034 MiB 00:05:19.477 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:19.477 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:19.477 size: 92.545471 MiB name: bdev_io_489182 00:05:19.477 size: 50.003479 MiB name: msgpool_489182 00:05:19.477 size: 36.509338 MiB name: fsdev_io_489182 00:05:19.477 size: 21.763794 MiB name: PDU_Pool 00:05:19.477 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:19.477 size: 4.133484 MiB name: evtpool_489182 00:05:19.477 size: 0.026123 MiB name: Session_Pool 00:05:19.477 end mempools------- 00:05:19.477 6 memzones totaling size 4.142822 MiB 00:05:19.477 size: 1.000366 MiB name: RG_ring_0_489182 00:05:19.477 size: 1.000366 MiB name: RG_ring_1_489182 00:05:19.477 size: 1.000366 MiB name: RG_ring_4_489182 00:05:19.477 size: 1.000366 MiB name: RG_ring_5_489182 00:05:19.477 size: 0.125366 MiB name: RG_ring_2_489182 00:05:19.477 size: 0.015991 MiB name: RG_ring_3_489182 00:05:19.477 end memzones------- 00:05:19.477 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:19.477 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:19.477 list of free elements. 
size: 10.862488 MiB 00:05:19.477 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:19.477 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:19.477 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:19.477 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:19.477 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:19.477 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:19.477 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:19.477 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:19.477 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:19.477 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:19.477 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:19.477 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:19.477 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:19.477 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:19.477 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:19.477 list of standard malloc elements. size: 199.218628 MiB 00:05:19.477 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:19.477 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:19.477 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:19.477 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:19.477 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:19.477 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:19.477 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:19.477 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:19.477 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:19.477 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:19.477 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:19.477 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:19.477 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:19.477 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:19.477 list of memzone associated elements. size: 599.918884 MiB 00:05:19.477 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:19.477 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:19.477 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:19.478 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:19.478 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:19.478 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_489182_0 00:05:19.478 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:19.478 associated memzone info: size: 48.002930 MiB name: MP_msgpool_489182_0 00:05:19.478 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:19.478 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_489182_0 00:05:19.478 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:19.478 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:19.478 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:19.478 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:19.478 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:19.478 associated memzone info: size: 3.000122 MiB name: MP_evtpool_489182_0 00:05:19.478 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:19.478 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_489182 00:05:19.478 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:19.478 associated memzone info: size: 1.007996 MiB name: MP_evtpool_489182 00:05:19.478 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:19.478 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:19.478 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:19.478 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:19.478 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:19.478 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:19.478 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:19.478 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:19.478 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:19.478 associated memzone info: size: 1.000366 MiB name: RG_ring_0_489182 00:05:19.478 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:19.478 associated memzone info: size: 1.000366 MiB name: RG_ring_1_489182 00:05:19.478 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:19.478 associated memzone info: size: 1.000366 MiB name: RG_ring_4_489182 00:05:19.478 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:19.478 associated memzone info: size: 1.000366 MiB name: RG_ring_5_489182 00:05:19.478 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:19.478 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_489182 00:05:19.478 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:19.478 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_489182 00:05:19.478 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:19.478 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:19.478 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:19.478 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:19.478 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:19.478 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:19.478 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:19.478 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_489182 00:05:19.478 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:19.478 associated memzone info: size: 0.125366 MiB name: RG_ring_2_489182 00:05:19.478 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:19.478 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:19.478 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:19.478 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:19.478 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:19.478 associated memzone info: size: 0.015991 MiB name: RG_ring_3_489182 00:05:19.478 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:19.478 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:19.478 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:19.478 associated memzone info: size: 0.000183 MiB name: MP_msgpool_489182 00:05:19.478 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:19.478 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_489182 00:05:19.478 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:19.478 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_489182 00:05:19.478 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:19.478 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:19.478 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:19.478 12:16:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 489182 00:05:19.478 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 489182 ']' 00:05:19.478 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 489182 00:05:19.478 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:19.478 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.478 12:16:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 489182 00:05:19.478 12:16:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.478 12:16:52 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.478 12:16:52 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 489182' 00:05:19.478 killing process with pid 489182 00:05:19.478 12:16:52 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 489182 00:05:19.478 12:16:52 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 489182 00:05:20.044 00:05:20.044 real 0m1.159s 00:05:20.044 user 0m1.117s 00:05:20.044 sys 0m0.444s 00:05:20.044 12:16:52 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.044 12:16:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.044 ************************************ 00:05:20.044 END TEST dpdk_mem_utility 00:05:20.044 ************************************ 00:05:20.044 12:16:52 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.044 12:16:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.044 12:16:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.044 12:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:20.044 ************************************ 00:05:20.044 START TEST event 00:05:20.044 ************************************ 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:20.044 * Looking for test storage... 00:05:20.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.044 12:16:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.044 12:16:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.044 12:16:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.044 12:16:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.044 12:16:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.044 12:16:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.044 12:16:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.044 12:16:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.044 12:16:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.044 12:16:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.044 12:16:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.044 12:16:52 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.044 12:16:52 event -- scripts/common.sh@345 -- # : 1 00:05:20.044 12:16:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.044 12:16:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.044 12:16:52 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.044 12:16:52 event -- scripts/common.sh@353 -- # local d=1 00:05:20.044 12:16:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.044 12:16:52 event -- scripts/common.sh@355 -- # echo 1 00:05:20.044 12:16:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.044 12:16:52 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.044 12:16:52 event -- scripts/common.sh@353 -- # local d=2 00:05:20.044 12:16:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.044 12:16:52 event -- scripts/common.sh@355 -- # echo 2 00:05:20.044 12:16:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.044 12:16:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.044 12:16:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.044 12:16:52 event -- scripts/common.sh@368 -- # return 0 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.044 --rc genhtml_branch_coverage=1 00:05:20.044 --rc genhtml_function_coverage=1 00:05:20.044 --rc genhtml_legend=1 00:05:20.044 --rc geninfo_all_blocks=1 00:05:20.044 --rc geninfo_unexecuted_blocks=1 00:05:20.044 00:05:20.044 ' 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.044 --rc genhtml_branch_coverage=1 00:05:20.044 --rc genhtml_function_coverage=1 00:05:20.044 --rc genhtml_legend=1 00:05:20.044 --rc geninfo_all_blocks=1 00:05:20.044 --rc geninfo_unexecuted_blocks=1 00:05:20.044 00:05:20.044 ' 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.044 --rc genhtml_branch_coverage=1 00:05:20.044 --rc genhtml_function_coverage=1 00:05:20.044 --rc genhtml_legend=1 00:05:20.044 --rc geninfo_all_blocks=1 00:05:20.044 --rc geninfo_unexecuted_blocks=1 00:05:20.044 00:05:20.044 ' 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.044 --rc genhtml_branch_coverage=1 00:05:20.044 --rc genhtml_function_coverage=1 00:05:20.044 --rc genhtml_legend=1 00:05:20.044 --rc geninfo_all_blocks=1 00:05:20.044 --rc geninfo_unexecuted_blocks=1 00:05:20.044 00:05:20.044 ' 00:05:20.044 12:16:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:20.044 12:16:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.044 12:16:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:20.044 12:16:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.044 12:16:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.044 ************************************ 00:05:20.044 START TEST event_perf 00:05:20.044 ************************************ 00:05:20.044 12:16:52 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:20.044 Running I/O for 1 seconds...[2024-10-30 12:16:52.681308] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:20.044 [2024-10-30 12:16:52.681374] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489378 ] 00:05:20.301 [2024-10-30 12:16:52.748224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.301 [2024-10-30 12:16:52.812046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.301 [2024-10-30 12:16:52.812163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.301 [2024-10-30 12:16:52.812282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.301 [2024-10-30 12:16:52.812287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.232 Running I/O for 1 seconds... 00:05:21.232 lcore 0: 228086 00:05:21.232 lcore 1: 228087 00:05:21.232 lcore 2: 228087 00:05:21.232 lcore 3: 228088 00:05:21.232 done. 00:05:21.232 00:05:21.232 real 0m1.209s 00:05:21.232 user 0m4.136s 00:05:21.232 sys 0m0.069s 00:05:21.232 12:16:53 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.232 12:16:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.232 ************************************ 00:05:21.232 END TEST event_perf 00:05:21.232 ************************************ 00:05:21.232 12:16:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.232 12:16:53 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:21.232 12:16:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.232 12:16:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.490 ************************************ 00:05:21.490 START TEST event_reactor 00:05:21.490 ************************************ 00:05:21.490 12:16:53 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:21.490 [2024-10-30 12:16:53.937317] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:21.490 [2024-10-30 12:16:53.937382] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489648 ] 00:05:21.490 [2024-10-30 12:16:54.001969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.490 [2024-10-30 12:16:54.059349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.863 test_start 00:05:22.863 oneshot 00:05:22.863 tick 100 00:05:22.863 tick 100 00:05:22.863 tick 250 00:05:22.863 tick 100 00:05:22.863 tick 100 00:05:22.863 tick 100 00:05:22.863 tick 250 00:05:22.863 tick 500 00:05:22.863 tick 100 00:05:22.863 tick 100 00:05:22.863 tick 250 00:05:22.863 tick 100 00:05:22.863 tick 100 00:05:22.863 test_end 00:05:22.863 00:05:22.863 real 0m1.200s 00:05:22.863 user 0m1.132s 00:05:22.863 sys 0m0.064s 00:05:22.863 12:16:55 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.863 12:16:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.863 ************************************ 00:05:22.864 END TEST event_reactor 00:05:22.864 ************************************ 00:05:22.864 12:16:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.864 12:16:55 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:22.864 12:16:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.864 12:16:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.864 ************************************ 00:05:22.864 START TEST event_reactor_perf 00:05:22.864 ************************************ 00:05:22.864 12:16:55 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.864 [2024-10-30 12:16:55.185385] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:22.864 [2024-10-30 12:16:55.185452] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489805 ] 00:05:22.864 [2024-10-30 12:16:55.252165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.864 [2024-10-30 12:16:55.308028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.801 test_start 00:05:23.801 test_end 00:05:23.801 Performance: 444488 events per second 00:05:23.801 00:05:23.801 real 0m1.197s 00:05:23.801 user 0m1.132s 00:05:23.801 sys 0m0.061s 00:05:23.801 12:16:56 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.801 12:16:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.801 ************************************ 00:05:23.801 END TEST event_reactor_perf 00:05:23.801 ************************************ 00:05:23.801 12:16:56 event -- event/event.sh@49 -- # uname -s 00:05:23.801 12:16:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.801 12:16:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.801 12:16:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.801 12:16:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.801 12:16:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.801 ************************************ 00:05:23.801 START TEST event_scheduler 00:05:23.801 ************************************ 00:05:23.801 12:16:56 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:23.801 * Looking for test storage... 
00:05:23.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:23.801 12:16:56 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.801 12:16:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.801 12:16:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.060 12:16:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.060 --rc genhtml_branch_coverage=1 00:05:24.060 --rc genhtml_function_coverage=1 00:05:24.060 --rc genhtml_legend=1 00:05:24.060 --rc geninfo_all_blocks=1 00:05:24.060 --rc geninfo_unexecuted_blocks=1 00:05:24.060 00:05:24.060 ' 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.060 --rc genhtml_branch_coverage=1 00:05:24.060 --rc genhtml_function_coverage=1 00:05:24.060 --rc genhtml_legend=1 00:05:24.060 --rc geninfo_all_blocks=1 00:05:24.060 --rc geninfo_unexecuted_blocks=1 00:05:24.060 00:05:24.060 ' 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.060 --rc genhtml_branch_coverage=1 00:05:24.060 --rc genhtml_function_coverage=1 00:05:24.060 --rc genhtml_legend=1 00:05:24.060 --rc geninfo_all_blocks=1 00:05:24.060 --rc geninfo_unexecuted_blocks=1 00:05:24.060 00:05:24.060 ' 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.060 --rc genhtml_branch_coverage=1 00:05:24.060 --rc genhtml_function_coverage=1 00:05:24.060 --rc genhtml_legend=1 00:05:24.060 --rc geninfo_all_blocks=1 00:05:24.060 --rc geninfo_unexecuted_blocks=1 00:05:24.060 00:05:24.060 ' 00:05:24.060 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.060 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=490004 00:05:24.060 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.060 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.060 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 490004 
00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 490004 ']' 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.060 12:16:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.060 [2024-10-30 12:16:56.594923] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:24.060 [2024-10-30 12:16:56.595000] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490004 ] 00:05:24.060 [2024-10-30 12:16:56.659657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.060 [2024-10-30 12:16:56.720742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.060 [2024-10-30 12:16:56.720807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.060 [2024-10-30 12:16:56.720875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.060 [2024-10-30 12:16:56.720878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:24.319 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 [2024-10-30 12:16:56.821895] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:24.319 [2024-10-30 12:16:56.821922] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:24.319 [2024-10-30 12:16:56.821939] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.319 [2024-10-30 12:16:56.821949] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.319 [2024-10-30 12:16:56.821959] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 [2024-10-30 12:16:56.921368] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
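[Editor's note] The scheduler app in the trace above is brought up in two RPC steps: it is launched with --wait-for-rpc, the dynamic scheduler is selected (falling back gracefully when the DPDK governor cannot initialize, as the NOTICE lines show), and only then is framework initialization started. A by-hand sketch of the same sequence, assuming the app is listening on rpc.py's default /var/tmp/spdk.sock (the harness's rpc_cmd wrapper does the equivalent):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # app idles until RPC-driven init
  ./scripts/rpc.py framework_set_scheduler dynamic   # must happen while reactors are still down
  ./scripts/rpc.py framework_start_init              # start the reactors; the test app takes over
  ./scripts/rpc.py framework_get_scheduler           # optional check that "dynamic" is active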
00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 ************************************ 00:05:24.319 START TEST scheduler_create_thread 00:05:24.319 ************************************ 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 2 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 3 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 4 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 5 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 6 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.319 7 00:05:24.319 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.319 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.319 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.319 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 8 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 9 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 10 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.578 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.145 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.145 00:05:25.145 real 0m0.592s 00:05:25.145 user 0m0.009s 00:05:25.145 sys 0m0.005s 00:05:25.145 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.145 12:16:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.145 ************************************ 00:05:25.145 END TEST scheduler_create_thread 00:05:25.145 ************************************ 00:05:25.145 12:16:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.145 12:16:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 490004 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 490004 ']' 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 490004 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 490004 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 490004' 00:05:25.145 killing process with pid 490004 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 490004 00:05:25.145 12:16:57 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 490004 00:05:25.404 [2024-10-30 12:16:58.021941] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
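[Editor's note] The scheduler_create_thread subtest that just finished exercises the per-thread RPCs exposed by the test plugin: pinned busy threads (-a 100), pinned idle threads (-a 0), an unpinned thread whose load is raised with scheduler_thread_set_active, and a throwaway thread (id 12) deleted right after creation. A condensed by-hand sketch of that lifecycle; the PYTHONPATH line is an assumption about how rpc.py locates the plugin module (the harness wires this up itself):

  export PYTHONPATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler:$PYTHONPATH
  rpc='./scripts/rpc.py --plugin scheduler_plugin'
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  tid=$($rpc scheduler_thread_create -n half_active -a 0)       # idle thread; the RPC returns the new thread id
  $rpc scheduler_thread_set_active "$tid" 50                    # raise its active load to 50
  $rpc scheduler_thread_delete "$tid"                           # remove it again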
00:05:25.662 00:05:25.662 real 0m1.824s 00:05:25.662 user 0m2.473s 00:05:25.662 sys 0m0.347s 00:05:25.662 12:16:58 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.662 12:16:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.662 ************************************ 00:05:25.662 END TEST event_scheduler 00:05:25.662 ************************************ 00:05:25.662 12:16:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.662 12:16:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.662 12:16:58 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.662 12:16:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.662 12:16:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.662 ************************************ 00:05:25.662 START TEST app_repeat 00:05:25.662 ************************************ 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=490195 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 490195' 00:05:25.662 Process app_repeat pid: 490195 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.662 spdk_app_start Round 0 00:05:25.662 12:16:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 490195 /var/tmp/spdk-nbd.sock 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 490195 ']' 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.662 12:16:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.662 [2024-10-30 12:16:58.316249] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:25.662 [2024-10-30 12:16:58.316472] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490195 ] 00:05:25.920 [2024-10-30 12:16:58.385699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.920 [2024-10-30 12:16:58.445954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.920 [2024-10-30 12:16:58.445957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.920 12:16:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.920 12:16:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:25.920 12:16:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.179 Malloc0 00:05:26.437 12:16:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.695 Malloc1 00:05:26.695 12:16:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.695 12:16:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.981 /dev/nbd0 00:05:26.981 12:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.981 12:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.981 1+0 records in 00:05:26.981 1+0 records out 00:05:26.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205296 s, 20.0 MB/s 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:26.981 12:16:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:26.981 12:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.981 12:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.981 12:16:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.238 /dev/nbd1 00:05:27.238 12:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.238 12:16:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.238 1+0 records in 00:05:27.238 1+0 records out 00:05:27.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245315 s, 16.7 MB/s 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:27.238 12:16:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:27.238 12:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.238 12:16:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.238 
12:16:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.238 12:16:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.238 12:16:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.495 { 00:05:27.495 "nbd_device": "/dev/nbd0", 00:05:27.495 "bdev_name": "Malloc0" 00:05:27.495 }, 00:05:27.495 { 00:05:27.495 "nbd_device": "/dev/nbd1", 00:05:27.495 "bdev_name": "Malloc1" 00:05:27.495 } 00:05:27.495 ]' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.495 { 00:05:27.495 "nbd_device": "/dev/nbd0", 00:05:27.495 "bdev_name": "Malloc0" 00:05:27.495 }, 00:05:27.495 { 00:05:27.495 "nbd_device": "/dev/nbd1", 00:05:27.495 "bdev_name": "Malloc1" 00:05:27.495 } 00:05:27.495 ]' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.495 /dev/nbd1' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.495 /dev/nbd1' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.495 256+0 records in 00:05:27.495 256+0 records out 00:05:27.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050811 s, 206 MB/s 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.495 256+0 records in 00:05:27.495 256+0 records out 00:05:27.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205854 s, 50.9 MB/s 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.495 12:17:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.753 256+0 records in 00:05:27.753 256+0 records out 00:05:27.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228772 s, 45.8 MB/s 00:05:27.753 12:17:00 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.753 12:17:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.010 12:17:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.268 12:17:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.525 12:17:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.525 12:17:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.781 12:17:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.039 [2024-10-30 12:17:01.611867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.039 [2024-10-30 12:17:01.665373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.039 [2024-10-30 12:17:01.665373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.039 [2024-10-30 12:17:01.722126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.039 [2024-10-30 12:17:01.722184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.318 12:17:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.318 12:17:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.318 spdk_app_start Round 1 00:05:32.318 12:17:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 490195 /var/tmp/spdk-nbd.sock 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 490195 ']' 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
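Every app_repeat round above runs the same write/verify protocol from bdev/nbd_common.sh: seed 1 MiB of random data into a scratch file, push it through each exported nbd device with O_DIRECT, then byte-compare each device against the seed. A sketch of that protocol, with the long Jenkins workspace path replaced by a $TEST_DIR variable purely for readability (the trace spells the path out in full):

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=$TEST_DIR/nbdrandtest
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            # oflag=direct bypasses the page cache so the data really lands
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"   # -b reports differing bytes
        done
        rm "$tmp_file"
    fi
}

A mismatch makes cmp exit non-zero and trips the test's error trap; the clean runs above are why no cmp output appears between the dd transfers and the disk teardown.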
00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.318 12:17:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:32.318 12:17:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.318 Malloc0 00:05:32.318 12:17:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.576 Malloc1 00:05:32.576 12:17:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.576 12:17:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.834 /dev/nbd0 00:05:33.092 12:17:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.092 12:17:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:33.092 1+0 records in 00:05:33.092 1+0 records out 00:05:33.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144864 s, 28.3 MB/s 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:33.092 12:17:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:33.092 12:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.092 12:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.092 12:17:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.350 /dev/nbd1 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.350 1+0 records in 00:05:33.350 1+0 records out 00:05:33.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225273 s, 18.2 MB/s 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:33.350 12:17:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.350 12:17:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:33.608 { 00:05:33.608 "nbd_device": "/dev/nbd0", 00:05:33.608 "bdev_name": "Malloc0" 00:05:33.608 }, 00:05:33.608 { 00:05:33.608 "nbd_device": "/dev/nbd1", 00:05:33.608 "bdev_name": "Malloc1" 00:05:33.608 } 00:05:33.608 ]' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.608 { 00:05:33.608 "nbd_device": "/dev/nbd0", 00:05:33.608 "bdev_name": "Malloc0" 00:05:33.608 }, 00:05:33.608 { 00:05:33.608 "nbd_device": "/dev/nbd1", 00:05:33.608 "bdev_name": "Malloc1" 00:05:33.608 } 00:05:33.608 ]' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.608 /dev/nbd1' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.608 /dev/nbd1' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.608 256+0 records in 00:05:33.608 256+0 records out 00:05:33.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501206 s, 209 MB/s 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.608 256+0 records in 00:05:33.608 256+0 records out 00:05:33.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204154 s, 51.4 MB/s 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.608 12:17:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.608 256+0 records in 00:05:33.608 256+0 records out 00:05:33.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225662 s, 46.5 MB/s 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.609 12:17:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.867 12:17:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.434 12:17:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.434 12:17:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.434 12:17:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.434 12:17:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.692 12:17:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.692 12:17:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.950 12:17:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.208 [2024-10-30 12:17:07.643910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.208 [2024-10-30 12:17:07.697851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.208 [2024-10-30 12:17:07.697851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.208 [2024-10-30 12:17:07.753959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.208 [2024-10-30 12:17:07.754022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.487 12:17:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.487 12:17:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.487 spdk_app_start Round 2 00:05:38.487 12:17:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 490195 /var/tmp/spdk-nbd.sock 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 490195 ']' 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
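The waitfornbd helper traced in each round reduces to two bounded polls: wait for the device name to appear in /proc/partitions, then prove the node answers a single 4 KiB O_DIRECT read. A sketch reconstructed from the trace; the sleep between retries and the /tmp scratch path are assumptions, since the trace only exposes the loop bounds and the workspace path:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        # one direct read distinguishes a live device from a stale node
        if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}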
00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.487 12:17:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:38.487 12:17:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.487 Malloc0 00:05:38.487 12:17:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.746 Malloc1 00:05:38.746 12:17:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.746 12:17:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.004 /dev/nbd0 00:05:39.004 12:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.004 12:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:39.004 1+0 records in 00:05:39.004 1+0 records out 00:05:39.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142973 s, 28.6 MB/s 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:39.004 12:17:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:39.004 12:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.004 12:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.004 12:17:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.262 /dev/nbd1 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.262 1+0 records in 00:05:39.262 1+0 records out 00:05:39.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217773 s, 18.8 MB/s 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:39.262 12:17:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.262 12:17:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.520 12:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:39.520 { 00:05:39.520 "nbd_device": "/dev/nbd0", 00:05:39.520 "bdev_name": "Malloc0" 00:05:39.520 }, 00:05:39.520 { 00:05:39.520 "nbd_device": "/dev/nbd1", 00:05:39.520 "bdev_name": "Malloc1" 00:05:39.520 } 00:05:39.520 ]' 00:05:39.520 12:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.520 { 00:05:39.520 "nbd_device": "/dev/nbd0", 00:05:39.520 "bdev_name": "Malloc0" 00:05:39.520 }, 00:05:39.520 { 00:05:39.520 "nbd_device": "/dev/nbd1", 00:05:39.521 "bdev_name": "Malloc1" 00:05:39.521 } 00:05:39.521 ]' 00:05:39.521 12:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.780 /dev/nbd1' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.780 /dev/nbd1' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.780 256+0 records in 00:05:39.780 256+0 records out 00:05:39.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384089 s, 273 MB/s 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.780 256+0 records in 00:05:39.780 256+0 records out 00:05:39.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019885 s, 52.7 MB/s 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.780 256+0 records in 00:05:39.780 256+0 records out 00:05:39.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221389 s, 47.4 MB/s 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.780 12:17:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.039 12:17:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.297 12:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.555 12:17:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.555 12:17:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.813 12:17:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.071 [2024-10-30 12:17:13.680843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.071 [2024-10-30 12:17:13.733887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.071 [2024-10-30 12:17:13.733890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.329 [2024-10-30 12:17:13.792819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.329 [2024-10-30 12:17:13.792891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.920 12:17:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 490195 /var/tmp/spdk-nbd.sock 00:05:43.920 12:17:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 490195 ']' 00:05:43.920 12:17:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.920 12:17:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.920 12:17:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
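After nbd_stop_disks, each round closes with the nbd_get_count check traced above: ask the target over the RPC socket which nbd devices it still exports, pull the names out of the JSON with jq, and count them with grep -c. On zero matches grep -c exits non-zero, which is exactly why a bare true shows up in the trace. A sketch:

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c fails when nothing matches, hence the || true
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}

The caller compares the result against the expected count: 2 while both Malloc bdevs are exported, 0 after the disks are stopped, as the '[' 0 -ne 0 ']' checks above show.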
00:05:43.920 12:17:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.920 12:17:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:44.226 12:17:16 event.app_repeat -- event/event.sh@39 -- # killprocess 490195 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 490195 ']' 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 490195 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 490195 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 490195' 00:05:44.226 killing process with pid 490195 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@971 -- # kill 490195 00:05:44.226 12:17:16 event.app_repeat -- common/autotest_common.sh@976 -- # wait 490195 00:05:44.514 spdk_app_start is called in Round 0. 00:05:44.514 Shutdown signal received, stop current app iteration 00:05:44.514 Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 reinitialization... 00:05:44.514 spdk_app_start is called in Round 1. 00:05:44.514 Shutdown signal received, stop current app iteration 00:05:44.514 Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 reinitialization... 00:05:44.514 spdk_app_start is called in Round 2. 00:05:44.514 Shutdown signal received, stop current app iteration 00:05:44.514 Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 reinitialization... 00:05:44.514 spdk_app_start is called in Round 3. 
00:05:44.514 Shutdown signal received, stop current app iteration 00:05:44.514 12:17:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.514 12:17:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.514 00:05:44.514 real 0m18.674s 00:05:44.514 user 0m41.281s 00:05:44.514 sys 0m3.197s 00:05:44.514 12:17:16 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.514 12:17:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.514 ************************************ 00:05:44.514 END TEST app_repeat 00:05:44.514 ************************************ 00:05:44.514 12:17:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.514 12:17:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.514 12:17:16 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.514 12:17:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.514 12:17:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.514 ************************************ 00:05:44.514 START TEST cpu_locks 00:05:44.514 ************************************ 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:44.514 * Looking for test storage... 00:05:44.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.514 12:17:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.514 --rc genhtml_branch_coverage=1 00:05:44.514 --rc genhtml_function_coverage=1 00:05:44.514 --rc genhtml_legend=1 00:05:44.514 --rc geninfo_all_blocks=1 00:05:44.514 --rc geninfo_unexecuted_blocks=1 00:05:44.514 00:05:44.514 ' 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.514 --rc genhtml_branch_coverage=1 00:05:44.514 --rc genhtml_function_coverage=1 00:05:44.514 --rc genhtml_legend=1 00:05:44.514 --rc geninfo_all_blocks=1 00:05:44.514 --rc geninfo_unexecuted_blocks=1 00:05:44.514 00:05:44.514 ' 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.514 --rc genhtml_branch_coverage=1 00:05:44.514 --rc genhtml_function_coverage=1 00:05:44.514 --rc genhtml_legend=1 00:05:44.514 --rc geninfo_all_blocks=1 00:05:44.514 --rc geninfo_unexecuted_blocks=1 00:05:44.514 00:05:44.514 ' 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.514 --rc genhtml_branch_coverage=1 00:05:44.514 --rc genhtml_function_coverage=1 00:05:44.514 --rc genhtml_legend=1 00:05:44.514 --rc geninfo_all_blocks=1 00:05:44.514 --rc geninfo_unexecuted_blocks=1 00:05:44.514 00:05:44.514 ' 00:05:44.514 12:17:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.514 12:17:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.514 12:17:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.514 12:17:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.514 12:17:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.774 ************************************ 
00:05:44.774 START TEST default_locks 00:05:44.774 ************************************ 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=492685 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 492685 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 492685 ']' 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.774 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.774 [2024-10-30 12:17:17.245012] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:44.774 [2024-10-30 12:17:17.245093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492685 ] 00:05:44.774 [2024-10-30 12:17:17.310962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.774 [2024-10-30 12:17:17.371177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.032 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.032 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:45.032 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 492685 00:05:45.032 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 492685 00:05:45.032 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.290 lslocks: write error 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 492685 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 492685 ']' 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 492685 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 492685 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 492685' 
00:05:45.290 killing process with pid 492685 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 492685 00:05:45.290 12:17:17 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 492685 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 492685 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 492685 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 492685 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 492685 ']' 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (492685) - No such process 00:05:45.856 ERROR: process (pid: 492685) is no longer running 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.856 00:05:45.856 real 0m1.184s 00:05:45.856 user 0m1.133s 00:05:45.856 sys 0m0.508s 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.856 12:17:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 ************************************ 00:05:45.856 END TEST default_locks 00:05:45.856 ************************************ 00:05:45.856 12:17:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.856 12:17:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.856 12:17:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.856 12:17:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 ************************************ 00:05:45.856 START TEST default_locks_via_rpc 00:05:45.856 ************************************ 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=492855 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 492855 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 492855 ']' 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.856 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 [2024-10-30 12:17:18.480858] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:45.856 [2024-10-30 12:17:18.480943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492855 ] 00:05:46.115 [2024-10-30 12:17:18.549040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.115 [2024-10-30 12:17:18.607577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 492855 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 492855 00:05:46.373 12:17:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 492855 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 492855 ']' 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 492855 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 492855 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.631 12:17:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 492855' 00:05:46.631 killing process with pid 492855 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 492855 00:05:46.631 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 492855 00:05:46.890 00:05:46.890 real 0m1.148s 00:05:46.890 user 0m1.122s 00:05:46.890 sys 0m0.496s 00:05:46.890 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.890 12:17:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.890 ************************************ 00:05:46.890 END TEST default_locks_via_rpc 00:05:46.890 ************************************ 00:05:47.150 12:17:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.150 12:17:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.150 12:17:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.150 12:17:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.150 ************************************ 00:05:47.150 START TEST non_locking_app_on_locked_coremask 00:05:47.150 ************************************ 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=493021 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 493021 /var/tmp/spdk.sock 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 493021 ']' 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.150 12:17:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.150 [2024-10-30 12:17:19.678698] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:47.150 [2024-10-30 12:17:19.678783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493021 ] 00:05:47.150 [2024-10-30 12:17:19.747265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.150 [2024-10-30 12:17:19.807377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=493139 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 493139 /var/tmp/spdk2.sock 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 493139 ']' 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.408 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.666 [2024-10-30 12:17:20.134318] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:47.666 [2024-10-30 12:17:20.134394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493139 ] 00:05:47.666 [2024-10-30 12:17:20.233041] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:47.666 [2024-10-30 12:17:20.233083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.925 [2024-10-30 12:17:20.351701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.491 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.492 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:48.492 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 493021 00:05:48.492 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 493021 00:05:48.492 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.058 lslocks: write error 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 493021 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 493021 ']' 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 493021 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 493021 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 493021' 00:05:49.058 killing process with pid 493021 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 493021 00:05:49.058 12:17:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 493021 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 493139 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 493139 ']' 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 493139 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 493139 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 493139' 00:05:49.992 killing 
process with pid 493139 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 493139 00:05:49.992 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 493139 00:05:50.558 00:05:50.558 real 0m3.320s 00:05:50.558 user 0m3.543s 00:05:50.559 sys 0m1.058s 00:05:50.559 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.559 12:17:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.559 ************************************ 00:05:50.559 END TEST non_locking_app_on_locked_coremask 00:05:50.559 ************************************ 00:05:50.559 12:17:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.559 12:17:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.559 12:17:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.559 12:17:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.559 ************************************ 00:05:50.559 START TEST locking_app_on_unlocked_coremask 00:05:50.559 ************************************ 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=493455 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 493455 /var/tmp/spdk.sock 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 493455 ']' 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.559 12:17:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.559 [2024-10-30 12:17:23.050276] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:50.559 [2024-10-30 12:17:23.050389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493455 ] 00:05:50.559 [2024-10-30 12:17:23.118078] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.559 [2024-10-30 12:17:23.118111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.559 [2024-10-30 12:17:23.174453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=493573 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 493573 /var/tmp/spdk2.sock 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 493573 ']' 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.817 12:17:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.817 [2024-10-30 12:17:23.488068] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:50.817 [2024-10-30 12:17:23.488169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493573 ] 00:05:51.075 [2024-10-30 12:17:23.588280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.075 [2024-10-30 12:17:23.696131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.009 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.009 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:52.009 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 493573 00:05:52.009 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 493573 00:05:52.009 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.575 lslocks: write error 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 493455 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 493455 ']' 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 493455 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 493455 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 493455' 00:05:52.575 killing process with pid 493455 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 493455 00:05:52.575 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 493455 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 493573 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 493573 ']' 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 493573 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 493573 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.511 12:17:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 493573' 00:05:53.511 killing process with pid 493573 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 493573 00:05:53.511 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 493573 00:05:53.771 00:05:53.771 real 0m3.293s 00:05:53.771 user 0m3.563s 00:05:53.771 sys 0m1.044s 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 ************************************ 00:05:53.771 END TEST locking_app_on_unlocked_coremask 00:05:53.771 ************************************ 00:05:53.771 12:17:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:53.771 12:17:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.771 12:17:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.771 12:17:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 ************************************ 00:05:53.771 START TEST locking_app_on_locked_coremask 00:05:53.771 ************************************ 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=493889 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 493889 /var/tmp/spdk.sock 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 493889 ']' 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.771 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 [2024-10-30 12:17:26.395214] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:53.771 [2024-10-30 12:17:26.395325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493889 ] 00:05:54.029 [2024-10-30 12:17:26.464100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.029 [2024-10-30 12:17:26.524713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=494013 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 494013 /var/tmp/spdk2.sock 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 494013 /var/tmp/spdk2.sock 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 494013 /var/tmp/spdk2.sock 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 494013 ']' 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.288 12:17:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.288 [2024-10-30 12:17:26.844056] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:54.288 [2024-10-30 12:17:26.844141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494013 ] 00:05:54.288 [2024-10-30 12:17:26.941960] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 493889 has claimed it. 00:05:54.288 [2024-10-30 12:17:26.942028] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (494013) - No such process 00:05:55.223 ERROR: process (pid: 494013) is no longer running 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 493889 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 493889 00:05:55.223 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.480 lslocks: write error 00:05:55.480 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 493889 00:05:55.480 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 493889 ']' 00:05:55.481 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 493889 00:05:55.481 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:55.481 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.481 12:17:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 493889 00:05:55.481 12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.481 12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.481 12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 493889' 00:05:55.481 killing process with pid 493889 00:05:55.481 12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 493889 00:05:55.481 12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 493889 00:05:55.740 00:05:55.740 real 0m2.071s 00:05:55.740 user 0m2.286s 00:05:55.740 sys 0m0.655s 00:05:55.740 12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.740 
12:17:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.740 ************************************ 00:05:55.740 END TEST locking_app_on_locked_coremask 00:05:55.740 ************************************ 00:05:55.998 12:17:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.998 12:17:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.998 12:17:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.998 12:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.998 ************************************ 00:05:55.998 START TEST locking_overlapped_coremask 00:05:55.998 ************************************ 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=494187 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 494187 /var/tmp/spdk.sock 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 494187 ']' 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.998 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.998 [2024-10-30 12:17:28.516001] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:55.999 [2024-10-30 12:17:28.516105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494187 ] 00:05:55.999 [2024-10-30 12:17:28.580122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.999 [2024-10-30 12:17:28.635152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.999 [2024-10-30 12:17:28.635273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.999 [2024-10-30 12:17:28.635302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=494313 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 494313 /var/tmp/spdk2.sock 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 494313 /var/tmp/spdk2.sock 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 494313 /var/tmp/spdk2.sock 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 494313 ']' 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.256 12:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.513 [2024-10-30 12:17:28.964936] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:05:56.514 [2024-10-30 12:17:28.965034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494313 ] 00:05:56.514 [2024-10-30 12:17:29.067145] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 494187 has claimed it. 00:05:56.514 [2024-10-30 12:17:29.067208] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (494313) - No such process 00:05:57.078 ERROR: process (pid: 494313) is no longer running 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 494187 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 494187 ']' 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 494187 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 494187 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 494187' 00:05:57.078 killing process with pid 494187 00:05:57.078 12:17:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 494187 00:05:57.078 12:17:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 494187 00:05:57.643 00:05:57.643 real 0m1.668s 00:05:57.643 user 0m4.637s 00:05:57.643 sys 0m0.463s 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.643 ************************************ 00:05:57.643 END TEST locking_overlapped_coremask 00:05:57.643 ************************************ 00:05:57.643 12:17:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.643 12:17:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.643 12:17:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.643 12:17:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.643 ************************************ 00:05:57.643 START TEST locking_overlapped_coremask_via_rpc 00:05:57.643 ************************************ 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=494475 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 494475 /var/tmp/spdk.sock 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 494475 ']' 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.643 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.643 [2024-10-30 12:17:30.241853] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:57.643 [2024-10-30 12:17:30.241940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494475 ] 00:05:57.643 [2024-10-30 12:17:30.308154] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.643 [2024-10-30 12:17:30.308199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.900 [2024-10-30 12:17:30.372276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.900 [2024-10-30 12:17:30.372332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.900 [2024-10-30 12:17:30.372335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=494491 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 494491 /var/tmp/spdk2.sock 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 494491 ']' 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.157 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.158 12:17:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.158 [2024-10-30 12:17:30.699636] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:05:58.158 [2024-10-30 12:17:30.699731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494491 ] 00:05:58.158 [2024-10-30 12:17:30.803486] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.158 [2024-10-30 12:17:30.803524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.415 [2024-10-30 12:17:30.929684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.415 [2024-10-30 12:17:30.929746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.415 [2024-10-30 12:17:30.929748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.980 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.980 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:58.980 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.980 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.980 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.238 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.238 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.238 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:59.238 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 [2024-10-30 12:17:31.673355] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 494475 has claimed it. 
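The claim failure above ("Cannot create lock on core 2, probably process 494475 has claimed it") is by construction: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), and by the time the second target is asked over RPC to claim its mask, the first has already locked core 2. The contested core falls out of plain mask arithmetic:

    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2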
00:05:59.239 request: 00:05:59.239 { 00:05:59.239 "method": "framework_enable_cpumask_locks", 00:05:59.239 "req_id": 1 00:05:59.239 } 00:05:59.239 Got JSON-RPC error response 00:05:59.239 response: 00:05:59.239 { 00:05:59.239 "code": -32603, 00:05:59.239 "message": "Failed to claim CPU core: 2" 00:05:59.239 } 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 494475 /var/tmp/spdk.sock 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 494475 ']' 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.239 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 494491 /var/tmp/spdk2.sock 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 494491 ']' 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
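The JSON-RPC exchange above is the heart of the test: framework_enable_cpumask_locks succeeds against the first target, then fails with -32603 ("Failed to claim CPU core: 2") on the second target, whose socket is /var/tmp/spdk2.sock. Stripped of the rpc_cmd wrapper, the failing call is equivalent to this invocation (script path relative to the SPDK tree):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks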
00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.497 12:17:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.755 00:05:59.755 real 0m2.055s 00:05:59.755 user 0m1.154s 00:05:59.755 sys 0m0.173s 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.755 12:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.755 ************************************ 00:05:59.755 END TEST locking_overlapped_coremask_via_rpc 00:05:59.755 ************************************ 00:05:59.755 12:17:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.755 12:17:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 494475 ]] 00:05:59.755 12:17:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 494475 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 494475 ']' 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 494475 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 494475 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 494475' 00:05:59.755 killing process with pid 494475 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 494475 00:05:59.755 12:17:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 494475 00:06:00.323 12:17:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 494491 ]] 00:06:00.323 12:17:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 494491 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 494491 ']' 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 494491 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
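check_remaining_locks, traced above, asserts that after the failed claim only the first target's cores stay locked. With the xtrace escaping removed, the comparison in event/cpu_locks.sh is a glob matched against a brace expansion:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"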
00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 494491 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 494491' 00:06:00.323 killing process with pid 494491 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 494491 00:06:00.323 12:17:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 494491 00:06:00.581 12:17:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.581 12:17:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.581 12:17:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 494475 ]] 00:06:00.581 12:17:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 494475 00:06:00.581 12:17:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 494475 ']' 00:06:00.581 12:17:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 494475 00:06:00.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (494475) - No such process 00:06:00.582 12:17:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 494475 is not found' 00:06:00.582 Process with pid 494475 is not found 00:06:00.582 12:17:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 494491 ]] 00:06:00.582 12:17:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 494491 00:06:00.582 12:17:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 494491 ']' 00:06:00.582 12:17:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 494491 00:06:00.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (494491) - No such process 00:06:00.582 12:17:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 494491 is not found' 00:06:00.582 Process with pid 494491 is not found 00:06:00.582 12:17:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.582 00:06:00.582 real 0m16.181s 00:06:00.582 user 0m29.150s 00:06:00.582 sys 0m5.331s 00:06:00.582 12:17:33 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.582 12:17:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.582 ************************************ 00:06:00.582 END TEST cpu_locks 00:06:00.582 ************************************ 00:06:00.582 00:06:00.582 real 0m40.736s 00:06:00.582 user 1m19.519s 00:06:00.582 sys 0m9.332s 00:06:00.582 12:17:33 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.582 12:17:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.582 ************************************ 00:06:00.582 END TEST event 00:06:00.582 ************************************ 00:06:00.582 12:17:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.582 12:17:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.582 12:17:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.582 12:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:00.840 ************************************ 00:06:00.840 START TEST thread 00:06:00.840 ************************************ 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:00.840 * Looking for test storage... 00:06:00.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.840 12:17:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.840 12:17:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.840 12:17:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.840 12:17:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.840 12:17:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.840 12:17:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.840 12:17:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.840 12:17:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.840 12:17:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.840 12:17:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.840 12:17:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.840 12:17:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:00.840 12:17:33 thread -- scripts/common.sh@345 -- # : 1 00:06:00.840 12:17:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.840 12:17:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.840 12:17:33 thread -- scripts/common.sh@365 -- # decimal 1 00:06:00.840 12:17:33 thread -- scripts/common.sh@353 -- # local d=1 00:06:00.840 12:17:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.840 12:17:33 thread -- scripts/common.sh@355 -- # echo 1 00:06:00.840 12:17:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.840 12:17:33 thread -- scripts/common.sh@366 -- # decimal 2 00:06:00.840 12:17:33 thread -- scripts/common.sh@353 -- # local d=2 00:06:00.840 12:17:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.840 12:17:33 thread -- scripts/common.sh@355 -- # echo 2 00:06:00.840 12:17:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.840 12:17:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.840 12:17:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.840 12:17:33 thread -- scripts/common.sh@368 -- # return 0 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.840 --rc genhtml_branch_coverage=1 00:06:00.840 --rc genhtml_function_coverage=1 00:06:00.840 --rc genhtml_legend=1 00:06:00.840 --rc geninfo_all_blocks=1 00:06:00.840 --rc geninfo_unexecuted_blocks=1 00:06:00.840 00:06:00.840 ' 00:06:00.840 12:17:33 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.840 --rc genhtml_branch_coverage=1 00:06:00.840 --rc genhtml_function_coverage=1 00:06:00.840 --rc genhtml_legend=1 00:06:00.840 --rc geninfo_all_blocks=1 00:06:00.840 --rc geninfo_unexecuted_blocks=1 00:06:00.840 00:06:00.840 ' 00:06:00.840 12:17:33 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.840 --rc genhtml_branch_coverage=1 00:06:00.841 --rc genhtml_function_coverage=1 00:06:00.841 --rc genhtml_legend=1 00:06:00.841 --rc geninfo_all_blocks=1 00:06:00.841 --rc geninfo_unexecuted_blocks=1 00:06:00.841 00:06:00.841 ' 00:06:00.841 12:17:33 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.841 --rc genhtml_branch_coverage=1 00:06:00.841 --rc genhtml_function_coverage=1 00:06:00.841 --rc genhtml_legend=1 00:06:00.841 --rc geninfo_all_blocks=1 00:06:00.841 --rc geninfo_unexecuted_blocks=1 00:06:00.841 00:06:00.841 ' 00:06:00.841 12:17:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.841 12:17:33 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:00.841 12:17:33 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.841 12:17:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.841 ************************************ 00:06:00.841 START TEST thread_poller_perf 00:06:00.841 ************************************ 00:06:00.841 12:17:33 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.841 [2024-10-30 12:17:33.461103] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:06:00.841 [2024-10-30 12:17:33.461169] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494984 ] 00:06:01.099 [2024-10-30 12:17:33.527173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.099 [2024-10-30 12:17:33.581651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.100 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:02.035 [2024-10-30T11:17:34.716Z] ====================================== 00:06:02.035 [2024-10-30T11:17:34.716Z] busy:2707485222 (cyc) 00:06:02.035 [2024-10-30T11:17:34.716Z] total_run_count: 366000 00:06:02.035 [2024-10-30T11:17:34.716Z] tsc_hz: 2700000000 (cyc) 00:06:02.035 [2024-10-30T11:17:34.716Z] ====================================== 00:06:02.035 [2024-10-30T11:17:34.716Z] poller_cost: 7397 (cyc), 2739 (nsec) 00:06:02.035 00:06:02.035 real 0m1.204s 00:06:02.035 user 0m1.139s 00:06:02.035 sys 0m0.059s 00:06:02.035 12:17:34 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.035 12:17:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.035 ************************************ 00:06:02.035 END TEST thread_poller_perf 00:06:02.035 ************************************ 00:06:02.035 12:17:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.035 12:17:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:02.035 12:17:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.035 12:17:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.035 ************************************ 00:06:02.035 START TEST thread_poller_perf 00:06:02.035 ************************************ 00:06:02.035 12:17:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.293 [2024-10-30 12:17:34.719061] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:06:02.294 [2024-10-30 12:17:34.719146] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495136 ] 00:06:02.294 [2024-10-30 12:17:34.786886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.294 [2024-10-30 12:17:34.842641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.294 Running 1000 pollers for 1 seconds with 0 microseconds period. 
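poller_cost in the summary above is derived, not measured separately: it is busy cycles divided by run count, with the nanosecond figure following from the 2.7 GHz TSC. Redoing the first run's numbers (values copied from this log):

    busy=2707485222; runs=366000; tsc_hz=2700000000
    echo $(( busy / runs ))                        # 7397 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2739 ns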
00:06:03.228 [2024-10-30T11:17:35.909Z] ====================================== 00:06:03.228 [2024-10-30T11:17:35.909Z] busy:2702376366 (cyc) 00:06:03.228 [2024-10-30T11:17:35.909Z] total_run_count: 4821000 00:06:03.228 [2024-10-30T11:17:35.909Z] tsc_hz: 2700000000 (cyc) 00:06:03.228 [2024-10-30T11:17:35.909Z] ====================================== 00:06:03.228 [2024-10-30T11:17:35.909Z] poller_cost: 560 (cyc), 207 (nsec) 00:06:03.228 00:06:03.228 real 0m1.202s 00:06:03.228 user 0m1.129s 00:06:03.228 sys 0m0.068s 00:06:03.228 12:17:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.228 12:17:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 ************************************ 00:06:03.228 END TEST thread_poller_perf 00:06:03.228 ************************************ 00:06:03.487 12:17:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.487 00:06:03.487 real 0m2.655s 00:06:03.487 user 0m2.400s 00:06:03.487 sys 0m0.259s 00:06:03.487 12:17:35 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.487 12:17:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.487 ************************************ 00:06:03.487 END TEST thread 00:06:03.487 ************************************ 00:06:03.487 12:17:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:03.487 12:17:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.487 12:17:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.487 12:17:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.487 12:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:03.487 ************************************ 00:06:03.487 START TEST app_cmdline 00:06:03.487 ************************************ 00:06:03.487 12:17:35 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:03.487 * Looking for test storage... 
00:06:03.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.487 12:17:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.487 --rc genhtml_branch_coverage=1 00:06:03.487 --rc genhtml_function_coverage=1 00:06:03.487 --rc genhtml_legend=1 00:06:03.487 --rc geninfo_all_blocks=1 00:06:03.487 --rc geninfo_unexecuted_blocks=1 00:06:03.487 00:06:03.487 ' 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.487 --rc genhtml_branch_coverage=1 00:06:03.487 --rc genhtml_function_coverage=1 00:06:03.487 --rc genhtml_legend=1 00:06:03.487 --rc geninfo_all_blocks=1 00:06:03.487 --rc geninfo_unexecuted_blocks=1 
00:06:03.487 00:06:03.487 ' 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.487 --rc genhtml_branch_coverage=1 00:06:03.487 --rc genhtml_function_coverage=1 00:06:03.487 --rc genhtml_legend=1 00:06:03.487 --rc geninfo_all_blocks=1 00:06:03.487 --rc geninfo_unexecuted_blocks=1 00:06:03.487 00:06:03.487 ' 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.487 --rc genhtml_branch_coverage=1 00:06:03.487 --rc genhtml_function_coverage=1 00:06:03.487 --rc genhtml_legend=1 00:06:03.487 --rc geninfo_all_blocks=1 00:06:03.487 --rc geninfo_unexecuted_blocks=1 00:06:03.487 00:06:03.487 ' 00:06:03.487 12:17:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:03.487 12:17:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=495345 00:06:03.487 12:17:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:03.487 12:17:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 495345 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 495345 ']' 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.487 12:17:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.488 [2024-10-30 12:17:36.168760] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:06:03.488 [2024-10-30 12:17:36.168862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495345 ] 00:06:03.747 [2024-10-30 12:17:36.233890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.747 [2024-10-30 12:17:36.293963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.005 12:17:36 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.005 12:17:36 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:04.005 12:17:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:04.263 { 00:06:04.263 "version": "SPDK v25.01-pre git sha1 0a41b9e4e", 00:06:04.263 "fields": { 00:06:04.263 "major": 25, 00:06:04.263 "minor": 1, 00:06:04.263 "patch": 0, 00:06:04.263 "suffix": "-pre", 00:06:04.263 "commit": "0a41b9e4e" 00:06:04.263 } 00:06:04.263 } 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:04.263 12:17:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:04.263 12:17:36 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.521 request: 00:06:04.521 { 00:06:04.521 "method": "env_dpdk_get_mem_stats", 00:06:04.521 "req_id": 1 00:06:04.521 } 00:06:04.521 Got JSON-RPC error response 00:06:04.521 response: 00:06:04.521 { 00:06:04.521 "code": -32601, 00:06:04.521 "message": "Method not found" 00:06:04.521 } 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.521 12:17:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 495345 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 495345 ']' 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 495345 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 495345 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 495345' 00:06:04.521 killing process with pid 495345 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@971 -- # kill 495345 00:06:04.521 12:17:37 app_cmdline -- common/autotest_common.sh@976 -- # wait 495345 00:06:05.088 00:06:05.088 real 0m1.584s 00:06:05.088 user 0m1.985s 00:06:05.088 sys 0m0.451s 00:06:05.088 12:17:37 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.088 12:17:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.088 ************************************ 00:06:05.088 END TEST app_cmdline 00:06:05.088 ************************************ 00:06:05.088 12:17:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:05.088 12:17:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.088 12:17:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.088 12:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.088 ************************************ 00:06:05.088 START TEST version 00:06:05.088 ************************************ 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:05.088 * Looking for test storage... 
00:06:05.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.088 12:17:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.088 12:17:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.088 12:17:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.088 12:17:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.088 12:17:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.088 12:17:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.088 12:17:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.088 12:17:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.088 12:17:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.088 12:17:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.088 12:17:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.088 12:17:37 version -- scripts/common.sh@344 -- # case "$op" in 00:06:05.088 12:17:37 version -- scripts/common.sh@345 -- # : 1 00:06:05.088 12:17:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.088 12:17:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.088 12:17:37 version -- scripts/common.sh@365 -- # decimal 1 00:06:05.088 12:17:37 version -- scripts/common.sh@353 -- # local d=1 00:06:05.088 12:17:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.088 12:17:37 version -- scripts/common.sh@355 -- # echo 1 00:06:05.088 12:17:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.088 12:17:37 version -- scripts/common.sh@366 -- # decimal 2 00:06:05.088 12:17:37 version -- scripts/common.sh@353 -- # local d=2 00:06:05.088 12:17:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.088 12:17:37 version -- scripts/common.sh@355 -- # echo 2 00:06:05.088 12:17:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.088 12:17:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.088 12:17:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.088 12:17:37 version -- scripts/common.sh@368 -- # return 0 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.088 --rc genhtml_branch_coverage=1 00:06:05.088 --rc genhtml_function_coverage=1 00:06:05.088 --rc genhtml_legend=1 00:06:05.088 --rc geninfo_all_blocks=1 00:06:05.088 --rc geninfo_unexecuted_blocks=1 00:06:05.088 00:06:05.088 ' 00:06:05.088 12:17:37 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.088 --rc genhtml_branch_coverage=1 00:06:05.088 --rc genhtml_function_coverage=1 00:06:05.088 --rc genhtml_legend=1 00:06:05.088 --rc geninfo_all_blocks=1 00:06:05.089 --rc geninfo_unexecuted_blocks=1 00:06:05.089 00:06:05.089 ' 00:06:05.089 12:17:37 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.089 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.089 --rc genhtml_branch_coverage=1 00:06:05.089 --rc genhtml_function_coverage=1 00:06:05.089 --rc genhtml_legend=1 00:06:05.089 --rc geninfo_all_blocks=1 00:06:05.089 --rc geninfo_unexecuted_blocks=1 00:06:05.089 00:06:05.089 ' 00:06:05.089 12:17:37 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.089 --rc genhtml_branch_coverage=1 00:06:05.089 --rc genhtml_function_coverage=1 00:06:05.089 --rc genhtml_legend=1 00:06:05.089 --rc geninfo_all_blocks=1 00:06:05.089 --rc geninfo_unexecuted_blocks=1 00:06:05.089 00:06:05.089 ' 00:06:05.089 12:17:37 version -- app/version.sh@17 -- # get_header_version major 00:06:05.089 12:17:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.089 12:17:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.089 12:17:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.348 12:17:37 version -- app/version.sh@17 -- # major=25 00:06:05.348 12:17:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:05.348 12:17:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.348 12:17:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.348 12:17:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.348 12:17:37 version -- app/version.sh@18 -- # minor=1 00:06:05.348 12:17:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:05.348 12:17:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.348 12:17:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.348 12:17:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.348 12:17:37 version -- app/version.sh@19 -- # patch=0 00:06:05.348 12:17:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:05.348 12:17:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:05.348 12:17:37 version -- app/version.sh@14 -- # cut -f2 00:06:05.348 12:17:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:05.348 12:17:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:05.348 12:17:37 version -- app/version.sh@22 -- # version=25.1 00:06:05.348 12:17:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:05.348 12:17:37 version -- app/version.sh@28 -- # version=25.1rc0 00:06:05.348 12:17:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:05.348 12:17:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:05.348 12:17:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:05.348 12:17:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:05.348 00:06:05.348 real 0m0.204s 00:06:05.348 user 0m0.134s 00:06:05.348 sys 0m0.096s 00:06:05.348 12:17:37 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.348 
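The version test traced above reads each component straight out of include/spdk/version.h — grep the macro, cut the tab-separated field, strip the quotes — yielding major 25, minor 1, patch 0 and suffix -pre, which the script renders as 25.1rc0 and checks against what the Python bindings print. One field's pipeline, mirroring the trace:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25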
12:17:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:05.348 ************************************ 00:06:05.348 END TEST version 00:06:05.348 ************************************ 00:06:05.348 12:17:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:05.348 12:17:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:05.348 12:17:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:05.348 12:17:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.348 12:17:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:05.348 12:17:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:05.348 12:17:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.348 12:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.348 12:17:37 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:05.348 12:17:37 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:05.348 12:17:37 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:05.348 12:17:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:05.348 12:17:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.348 12:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:05.348 ************************************ 00:06:05.348 START TEST nvmf_tcp 00:06:05.348 ************************************ 00:06:05.348 12:17:37 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:05.348 * Looking for test storage... 
00:06:05.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:05.349 12:17:37 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.349 12:17:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.349 12:17:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.349 12:17:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.349 12:17:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:05.608 12:17:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.608 12:17:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.608 12:17:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.608 12:17:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:05.608 12:17:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:05.608 12:17:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.608 12:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.608 ************************************ 00:06:05.608 START TEST nvmf_target_core 00:06:05.608 ************************************ 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:05.608 * Looking for test storage... 00:06:05.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:05.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.608 --rc genhtml_branch_coverage=1 00:06:05.608 --rc genhtml_function_coverage=1 00:06:05.608 --rc genhtml_legend=1 00:06:05.608 --rc geninfo_all_blocks=1 00:06:05.608 --rc geninfo_unexecuted_blocks=1 00:06:05.608 00:06:05.608 ' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.608 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:05.609 
************************************
00:06:05.609 START TEST nvmf_abort
************************************
00:06:05.609 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:06:05.868 * Looking for test storage...
00:06:05.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
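The lcov probe traced before each of these tests (shown in full in the nvmf_target_core preamble above, and elided here where it repeats verbatim) decides whether the installed lcov is new enough for the --rc coverage options. Reconstructed from that xtrace, the version test reduces to the sketch below; this is a sketch assembled from the trace, not the verbatim scripts/common.sh source:

# lt 1.15 2  ->  cmp_versions 1.15 '<' 2 : split each version on '.', '-' or
# ':' and compare numerically, component by component (missing components
# count as 0, so 1.15 vs 2 compares 1 against 2 and stops there).
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *=* ]]   # all components equal: true only for '<=', '>=', '=='
}

Here `lt 1.15 2` succeeds, which is why the log shows lcov_rc_opt being set and LCOV_OPTS exported with the --rc lcov_branch_coverage/lcov_function_coverage flags.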
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
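One aside before following nvmftestinit's trace: the "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message logged during the common.sh preamble above is bash complaining that an empty string was handed to a numeric -eq test (the traced '[' '' -eq 1 ']'). It is harmless here, since the test simply evaluates false and the script carries on, but the pattern is worth noting. A generic reproduction with two conventional guards; the variable name is illustrative, not the one common.sh uses:

flag=''                                   # unset/empty, as in the traced '[' '' -eq 1 ']'
[ "$flag" -eq 1 ] && echo on              # prints "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ] && echo on         # default-expand to 0: quiet, still false
[[ -n $flag && $flag -eq 1 ]] && echo on  # or require non-empty before comparing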
00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:05.869 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:05.870 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:05.870 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.413 12:17:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:08.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:08.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:08.413 12:17:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:08.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:08.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.413 12:17:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:08.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:06:08.413 00:06:08.413 --- 10.0.0.2 ping statistics --- 00:06:08.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.413 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:08.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:06:08.413 00:06:08.413 --- 10.0.0.1 ping statistics --- 00:06:08.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.413 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.413 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=497431 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 497431 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 497431 ']' 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.414 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.414 [2024-10-30 12:17:40.768380] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
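For anyone rebuilding this test bed by hand, the nvmf_tcp_init sequence traced above condenses to the following; this is assembled from the trace, with cvl_0_0/cvl_0_1 being the two ice ports discovered under 0000:0a:00.x and the iptables comment the suite attaches omitted for brevity:

ip -4 addr flush cvl_0_0                  # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk              # the target port lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Only after both pings answer (0.218 ms and 0.066 ms above) does the helper return 0, and NVMF_APP is then prefixed with the netns exec command so nvmf_tgt starts inside the namespace.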
00:06:08.414 [2024-10-30 12:17:40.768465] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.414 [2024-10-30 12:17:40.841958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.414 [2024-10-30 12:17:40.898685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.414 [2024-10-30 12:17:40.898744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.414 [2024-10-30 12:17:40.898772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.414 [2024-10-30 12:17:40.898783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.414 [2024-10-30 12:17:40.898796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:08.414 [2024-10-30 12:17:40.900148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.414 [2024-10-30 12:17:40.900325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.414 [2024-10-30 12:17:40.900321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.414 [2024-10-30 12:17:41.048066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.414 Malloc0 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.414 Delay0 
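The rpc_cmd calls just traced are the suite's wrapper (effectively scripts/rpc.py talking to the freshly started nvmf_tgt over /var/tmp/spdk.sock). Issued by hand they would look roughly like this; the explicit rpc.py spelling is an assumption, while the flags are exactly as traced:

# TCP transport with the traced options (the suite's -t tcp -o, plus -u 8192 -a 256)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
# 64 MiB RAM-backed bdev with 4096-byte blocks ...
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
# ... wrapped in a delay bdev; the four values are the average/p99 read and
# write latencies in microseconds, i.e. roughly 1 s added to every I/O
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000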
00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.414 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 [2024-10-30 12:17:41.114472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.672 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:08.672 [2024-10-30 12:17:41.270360] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:11.204 Initializing NVMe Controllers 00:06:11.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:11.204 controller IO queue size 128 less than required 00:06:11.204 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:11.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:11.204 Initialization complete. Launching workers. 
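The provisioning and workload steps above, again written out as standalone calls with the same caveat as before; subsystem names, addresses and flags are taken verbatim from the trace:

scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose the delay bdev as NSID 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Queue depth 128 against a ~1 s latency namespace keeps the controller queue
# saturated, so the example always has in-flight I/O available to abort.
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

That saturation is why the run warns "controller IO queue size 128 less than required" and, in a one-second run (-t 1), reports tens of thousands of aborts below: 28179 submitted, 28122 successful, 57 unsuccessful, 0 failed.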
00:06:11.204 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28118 00:06:11.204 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28179, failed to submit 62 00:06:11.204 success 28122, unsuccessful 57, failed 0 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:11.204 rmmod nvme_tcp 00:06:11.204 rmmod nvme_fabrics 00:06:11.204 rmmod nvme_keyring 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 497431 ']' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 497431 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 497431 ']' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 497431 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 497431 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 497431' 00:06:11.204 killing process with pid 497431 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 497431 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 497431 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.204 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:13.112 00:06:13.112 real 0m7.431s 00:06:13.112 user 0m10.698s 00:06:13.112 sys 0m2.546s 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:13.112 ************************************ 00:06:13.112 END TEST nvmf_abort 00:06:13.112 ************************************ 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.112 ************************************ 00:06:13.112 START TEST nvmf_ns_hotplug_stress 00:06:13.112 ************************************ 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:13.112 * Looking for test storage... 
00:06:13.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:13.112 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:13.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.371 --rc genhtml_branch_coverage=1 00:06:13.371 --rc genhtml_function_coverage=1 00:06:13.371 --rc genhtml_legend=1 00:06:13.371 --rc geninfo_all_blocks=1 00:06:13.371 --rc geninfo_unexecuted_blocks=1 00:06:13.371 00:06:13.371 ' 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:13.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.371 --rc genhtml_branch_coverage=1 00:06:13.371 --rc genhtml_function_coverage=1 00:06:13.371 --rc genhtml_legend=1 00:06:13.371 --rc geninfo_all_blocks=1 00:06:13.371 --rc geninfo_unexecuted_blocks=1 00:06:13.371 00:06:13.371 ' 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:13.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.371 --rc genhtml_branch_coverage=1 00:06:13.371 --rc genhtml_function_coverage=1 00:06:13.371 --rc genhtml_legend=1 00:06:13.371 --rc geninfo_all_blocks=1 00:06:13.371 --rc geninfo_unexecuted_blocks=1 00:06:13.371 00:06:13.371 ' 00:06:13.371 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:13.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.371 --rc genhtml_branch_coverage=1 00:06:13.372 --rc genhtml_function_coverage=1 00:06:13.372 --rc genhtml_legend=1 00:06:13.372 --rc geninfo_all_blocks=1 00:06:13.372 --rc geninfo_unexecuted_blocks=1 00:06:13.372 00:06:13.372 ' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated, duplicate entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[rotated copy of the same value, duplicates elided] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[rotated copy of the same value, duplicates elided] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same value as @4, duplicates elided] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:13.372 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:15.277 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.277 
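
The trace above is the harness building its NIC lookup tables: every PCI function is cached by vendor:device ID, and the e810/x722/mlx arrays are then resolved from that cache (0x8086:0x159b, found on both 0000:0a:00.0 and 0000:0a:00.1 here, is an Intel E810-series device ID). A minimal sketch of that kind of scan, assuming lspci -Dnmm output and only the device IDs printed in the log; the parsing details are an illustration, not the harness code itself:

declare -A pci_bus_cache
while read -r addr class vendor device _; do
    vendor=${vendor//\"/} device=${device//\"/}        # lspci -nmm quotes its fields
    pci_bus_cache["0x$vendor:0x$device"]+="$addr "     # e.g. 0x8086:0x159b -> "0000:0a:00.0 0000:0a:00.1 "
done < <(lspci -Dnmm)
e810=(${pci_bus_cache["0x8086:0x159b"]})               # the two functions the log reports
echo "Found ${#e810[@]} E810 function(s): ${e810[*]}"
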
12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:15.277 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:15.277 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:15.277 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.277 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.278 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.278 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.278 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.278 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.278 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.278 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.536 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.536 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.536 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:15.536 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:06:15.536 00:06:15.536 --- 10.0.0.2 ping statistics --- 00:06:15.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.536 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:06:15.536 00:06:15.536 --- 10.0.0.1 ping statistics --- 00:06:15.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.536 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=499797 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 499797 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
499797 ']' 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.536 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.536 [2024-10-30 12:17:48.129564] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:06:15.536 [2024-10-30 12:17:48.129680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.536 [2024-10-30 12:17:48.207655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.794 [2024-10-30 12:17:48.271752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.794 [2024-10-30 12:17:48.271806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.794 [2024-10-30 12:17:48.271835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.794 [2024-10-30 12:17:48.271847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.794 [2024-10-30 12:17:48.271857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
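
At this point the phy-mode plumbing is complete: the first E810 port was moved into its own network namespace with 10.0.0.2/24, its sibling stayed in the root namespace with 10.0.0.1/24, both directions were ping-verified, an iptables rule opened TCP/4420 on the initiator side, and nvmf_tgt was started inside the namespace so NVMe/TCP traffic has to cross the physical link. Condensed from the commands traced above (the nvmf_tgt path is shortened to a relative one; otherwise a sketch of exactly what the log shows):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# the harness then polls /var/tmp/spdk.sock (waitforlisten) before issuing RPCs
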
00:06:15.794 [2024-10-30 12:17:48.273377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.794 [2024-10-30 12:17:48.275278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.794 [2024-10-30 12:17:48.275291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:15.794 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:16.052 [2024-10-30 12:17:48.655234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.052 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.309 12:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.568 [2024-10-30 12:17:49.214224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.568 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.826 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:17.393 Malloc0 00:06:17.393 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.393 Delay0 00:06:17.650 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.908 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:18.165 NULL1 00:06:18.165 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:18.423 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=500097 00:06:18.423 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:18.423 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:18.423 12:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.681 12:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.939 12:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:18.939 12:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:19.197 true 00:06:19.197 12:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:19.197 12:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.455 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.713 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:19.713 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:19.970 true 00:06:19.970 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:19.970 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.228 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.487 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:20.487 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:20.745 true 00:06:20.745 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:20.745 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.119 Read completed with error (sct=0, sc=11) 00:06:22.119 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.119 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:22.119 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:22.376 true 00:06:22.376 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:22.376 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.634 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.892 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:22.892 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:23.151 true 00:06:23.151 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:23.151 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.418 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.676 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:23.676 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:23.934 true 00:06:23.934 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:23.934 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.868 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.125 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:25.125 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1007 00:06:25.382 true 00:06:25.382 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:25.382 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.638 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.895 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:25.895 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:26.153 true 00:06:26.153 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:26.153 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.412 12:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.978 12:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:26.978 12:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:26.978 true 00:06:26.978 12:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:26.979 12:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.909 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.425 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:28.425 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:28.683 true 00:06:28.683 12:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:28.683 12:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.941 12:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.199 12:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:29.199 12:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:29.457 true 00:06:29.457 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:29.457 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.715 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.973 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:29.973 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:30.231 true 00:06:30.231 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:30.231 12:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.164 12:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.422 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:31.422 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:31.679 true 00:06:31.679 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:31.679 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.937 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.503 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:32.503 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:32.503 true 00:06:32.503 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 
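
Each group of entries above is one pass of the stress loop in ns_hotplug_stress.sh: for as long as spdk_nvme_perf (PID 500097) keeps randread I/O running against the subsystem, the loop detaches namespace 1, re-attaches Delay0, and bumps the NULL1 resize argument by one per pass (1001, 1002, ...). Reads that land in the hot-unplug window are what produce the suppressed "Read completed with error (sct=0, sc=11)" messages on the initiator. A sketch of the loop shape as reconstructed from the trace, with rpc_py standing for the rpc.py invocation bound at target/ns_hotplug_stress.sh@11, not the script verbatim:

null_size=1000
while kill -0 "$PERF_PID" 2> /dev/null; do             # run until the perf process exits
    rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc_py bdev_null_resize NULL1 $((++null_size))     # 1001, 1002, ... one step per pass
done
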
00:06:32.503 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.067 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.067 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:33.067 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:33.324 true 00:06:33.324 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:33.324 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.610 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:34.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:34.866 true 00:06:35.123 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:35.123 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.379 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.634 12:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:35.634 12:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:35.890 true 00:06:35.890 12:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:35.890 12:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.147 12:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.404 12:18:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:36.404 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:36.661 true 00:06:36.661 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:36.661 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.592 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.849 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:37.849 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:38.107 true 00:06:38.107 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:38.107 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.366 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.624 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:38.624 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:38.881 true 00:06:38.881 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:38.881 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.814 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.072 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:40.072 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:40.331 true 00:06:40.331 
12:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:40.331 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.589 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.153 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:41.153 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:41.153 true 00:06:41.153 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:41.153 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.085 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.341 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:42.341 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:42.598 true 00:06:42.598 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:42.598 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.856 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.113 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:43.113 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:43.371 true 00:06:43.371 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:43.371 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.629 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.886 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:43.886 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:44.143 true 00:06:44.143 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:44.143 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.517 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.517 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:45.517 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:45.775 true 00:06:45.775 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:45.775 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.033 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.291 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:46.291 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:46.548 true 00:06:46.548 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:46.548 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.807 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.066 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:47.066 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:47.324 true 00:06:47.324 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097 00:06:47.324 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.259 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:48.824 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:48.824 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:48.824 Initializing NVMe Controllers
00:06:48.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:48.824 Controller IO queue size 128, less than required.
00:06:48.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:48.824 Controller IO queue size 128, less than required.
00:06:48.824 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:48.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:48.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:48.824 Initialization complete. Launching workers.
00:06:48.824 ========================================================
00:06:48.824                                                                                                Latency(us)
00:06:48.824 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:48.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     590.47       0.29   81750.63    2558.08 1012083.50
00:06:48.824 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7898.04       3.86   16157.04    2220.18  544998.90
00:06:48.824 ========================================================
00:06:48.824 Total                                                                    :    8488.52       4.14   20719.82    2220.18 1012083.50
00:06:48.824
00:06:48.824 true
00:06:48.824 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 500097
00:06:48.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (500097) - No such process
00:06:48.824 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 500097
00:06:48.824 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.081 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:49.338 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:49.338 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:49.338 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:49.338 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:49.338 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:49.596 null0
00:06:49.854 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:49.854 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:49.854 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:50.111 null1
00:06:50.111 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:50.111 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:50.111 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:50.368 null2
00:06:50.368 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:50.368 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:50.368 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:50.626 null3
00:06:50.626 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:50.626 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:50.626 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:50.884 null4
00:06:50.884 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:50.884 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:50.884 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:51.142 null5
00:06:51.142 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:51.142 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:51.142 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:51.399 null6
00:06:51.399 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:51.399 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:51.399 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:51.661 null7
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
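The Initializing/Latency block above is the final report of the background I/O job that had been driving traffic at NSIDs 1 and 2 while they were hot-plugged; evidently that is the process (500097) the kill -0 / wait lines then reap. The eight bdev_null_create calls take a bdev name, a size in MiB, and a block size in bytes, so each nullN here is a 100 MiB null bdev with 4096-byte blocks. A condensed sketch of that setup stage follows; the command form is taken from the trace, but the loop wrapper and the relative scripts/rpc.py path are assumptions, not the verbatim script:

    # Create the eight 100 MiB, 4096-byte-block null bdevs used below
    # (names and arguments as traced; loop form assumed for brevity)
    for i in $(seq 0 7); do
        scripts/rpc.py bdev_null_create "null$i" 100 4096
    done

Each call prints the new bdev's name on success, which is why the bare null0 ... null7 lines appear interleaved with the shell trace.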
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
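The @14-@18 lines interleaved through this stretch are the bodies of the add_remove workers: each one binds a single namespace ID to a single null bdev and hot-plugs it ten times, as the (( i < 10 )) guards show. Reconstructed from the echoed script lines (the real function lives in test/nvmf/target/ns_hotplug_stress.sh and may differ in detail):

    # One hot-plug worker: attach, then detach, the same NSID ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }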
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.661 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
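The @62-@64 lines are the launcher: each worker is forked into the background and its PID recorded so the script can reap the whole set afterwards (the wait 504794 504795 ... line just below). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script:

    # Fork eight workers, one per (NSID, bdev) pair, and remember their PIDs
    pids=()
    for ((i = 0; i < nthreads; i++)); do   # nthreads=8 per the trace
        add_remove $((i + 1)) "null$i" &   # NSIDs 1-8 map onto null0-null7
        pids+=($!)
    done
    wait "${pids[@]}"                      # block until every worker finishes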
00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 504794 504795 504797 504799 504801 504803 504805 504807 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.662 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.922 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.180 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.438 12:18:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.438 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.697 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.697 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.697 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.954 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.954 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.954 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.955 12:18:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.955 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.213 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.213 12:18:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
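From here on the log is steady-state churn: eight concurrent workers each looping over nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns, so the NSID ordering varies from round to round purely by scheduling. To see which namespaces are attached at any instant, one could query the target out of band (a hypothetical diagnostic, not part of the traced run):

    # Dump all subsystems as JSON, including the namespaces currently
    # attached to nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_get_subsystems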
00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.472 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.730 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.988 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.988 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.988 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.988 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.988 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.989 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.247 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.247 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.247 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.247 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.247 12:18:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.248 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.248 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.248 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.506 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.506 12:18:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.764 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.764 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.764 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.764 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.764 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.764 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.040 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.561 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.561 12:18:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.561 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.561 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.561 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.562 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.562 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.562 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.820 12:18:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.079 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.079 12:18:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
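For reference, the two RPCs exercised throughout take their arguments differently: nvmf_subsystem_add_ns names the namespace ID via -n and the backing bdev positionally, while nvmf_subsystem_remove_ns takes only the subsystem NQN and the bare NSID. One attach/detach pair exactly as it recurs in this trace:

    # Attach null4 as namespace 5 of cnode1, then detach namespace 5
    scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5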
00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.596 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.855 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.113 12:18:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.113 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.114 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.372 12:18:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.372 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:57.629 12:18:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.629 rmmod nvme_tcp 00:06:57.629 rmmod nvme_fabrics 00:06:57.629 rmmod nvme_keyring 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 499797 ']' 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 499797 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 499797 ']' 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 499797 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 499797 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 499797' 00:06:57.629 killing process with pid 499797 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 499797 00:06:57.629 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 499797 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:06:57.889 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.433 00:07:00.433 real 0m46.795s 00:07:00.433 user 3m39.533s 00:07:00.433 sys 0m15.586s 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.433 ************************************ 00:07:00.433 END TEST nvmf_ns_hotplug_stress 00:07:00.433 ************************************ 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.433 ************************************ 00:07:00.433 START TEST nvmf_delete_subsystem 00:07:00.433 ************************************ 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:00.433 * Looking for test storage... 00:07:00.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.433 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.434 12:18:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.434 --rc genhtml_branch_coverage=1 00:07:00.434 --rc genhtml_function_coverage=1 00:07:00.434 --rc genhtml_legend=1 00:07:00.434 --rc geninfo_all_blocks=1 00:07:00.434 --rc geninfo_unexecuted_blocks=1 00:07:00.434 00:07:00.434 ' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.434 --rc genhtml_branch_coverage=1 00:07:00.434 --rc genhtml_function_coverage=1 00:07:00.434 --rc genhtml_legend=1 00:07:00.434 --rc geninfo_all_blocks=1 00:07:00.434 --rc geninfo_unexecuted_blocks=1 00:07:00.434 00:07:00.434 ' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:00.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.434 --rc genhtml_branch_coverage=1 00:07:00.434 --rc genhtml_function_coverage=1 00:07:00.434 --rc genhtml_legend=1 00:07:00.434 --rc geninfo_all_blocks=1 00:07:00.434 --rc geninfo_unexecuted_blocks=1 00:07:00.434 00:07:00.434 ' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:00.434 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.434 --rc genhtml_branch_coverage=1 00:07:00.434 --rc genhtml_function_coverage=1 00:07:00.434 --rc genhtml_legend=1 00:07:00.434 --rc geninfo_all_blocks=1 00:07:00.434 --rc geninfo_unexecuted_blocks=1 00:07:00.434 00:07:00.434 ' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.434 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.435 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.336 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:02.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.337 
12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:02.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:02.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:02.337 Found net devices under 0000:0a:00.1: cvl_0_1 
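Both "Found net devices under ..." entries are produced by the sysfs walk in nvmf/common.sh (@410-@429): each e810 PCI function that survives the device-ID filtering above is mapped to its kernel interface by globbing its net/ directory. A standalone rendition of just that mapping step, with the two BDFs hard-coded for illustration:

  # sketch of the per-device netdev lookup traced above
  net_devs=()
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # glob the bound netdevs (@411)
      pci_net_devs=("${pci_net_devs[@]##*/}")                   # keep only the interface names (@427)
      echo "Found net devices under $pci: ${pci_net_devs[*]}"   # (@428)
      net_devs+=("${pci_net_devs[@]}")                          # (@429)
  done

Here that yields cvl_0_0 and cvl_0_1, which the harness then splits into the target and initiator interfaces.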
00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.337 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:07:02.338 00:07:02.338 --- 10.0.0.2 ping statistics --- 00:07:02.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.338 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:02.338 00:07:02.338 --- 10.0.0.1 ping statistics --- 00:07:02.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.338 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=507703 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 507703 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 507703 ']' 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.338 12:18:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.338 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.338 [2024-10-30 12:18:34.920673] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:07:02.338 [2024-10-30 12:18:34.920763] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.338 [2024-10-30 12:18:34.993519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.597 [2024-10-30 12:18:35.050296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.597 [2024-10-30 12:18:35.050368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.597 [2024-10-30 12:18:35.050397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.597 [2024-10-30 12:18:35.050407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.597 [2024-10-30 12:18:35.050417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.597 [2024-10-30 12:18:35.054278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.597 [2024-10-30 12:18:35.054284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 [2024-10-30 12:18:35.189428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:02.597 12:18:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 [2024-10-30 12:18:35.205658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 NULL1 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 Delay0 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=507725 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:02.597 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:02.855 [2024-10-30 12:18:35.290379] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
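Condensed, the bring-up traced between nvmfappstart and the perf launch is the following sequence: an nvmf_tgt inside the cvl_0_0_ns_spdk namespace, a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev wrapped in a delay bdev so in-flight I/O lingers long enough for the delete to race it. Every flag below is copied from the trace; rpc.py talks to the default /var/tmp/spdk.sock:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB backing, 512 B blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s avg/p99 read and write latency (us)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # 5 s of qd-128 randrw, 70% reads

The nvmf_delete_subsystem call two seconds later (@32, after the sleep 2 at @30) is the actual test: it must succeed while Delay0 still holds queued I/O.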
00:07:04.756 12:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.756 12:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.756 12:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Write completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 Read completed with error (sct=0, sc=8) 00:07:04.756 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 [2024-10-30 12:18:37.372530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de1680 is same with the 
state(6) to be set 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 starting I/O failed: -6 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 [2024-10-30 12:18:37.373283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c5000cfe0 is same with the state(6) to be set 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 
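These repeating completions are the expected signature of the delete racing live I/O, not a test failure: sct=0/sc=8 is, per the NVMe base spec, generic status 08h, Command Aborted due to SQ Deletion, and the interleaved "starting I/O failed: -6" lines are fresh submissions bouncing off the dying qpair with -ENXIO. When triaging a log like this, a quick homogeneity check separates benign aborts from real media or transport errors; console.log below is a hypothetical stand-in for the captured output:

  grep -c 'completed with error (sct=0, sc=8)' console.log   # benign SQ-deletion aborts
  grep 'completed with error' console.log | grep -vc 'sc=8'  # anything that is not an abort

A non-zero second count would be worth a closer look; here every completion in the flood carries the same status.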
00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with 
error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Write completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:04.757 Read completed with error (sct=0, sc=8) 00:07:05.690 [2024-10-30 12:18:38.344811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de29a0 is same with the state(6) to be set 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 [2024-10-30 12:18:38.375040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8c5000d310 is same with the state(6) to be set 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 Write completed with error (sct=0, sc=8) 00:07:05.948 Read completed with error (sct=0, sc=8) 00:07:05.948 [2024-10-30 12:18:38.377083] 
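The completion status spammed above is worth decoding once: sct=0 is the generic status code type, and within that set sc=0x8 is "command aborted due to SQ deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION in include/spdk/nvme_spec.h), which is exactly what in-flight I/O should complete with while nvmf_delete_subsystem tears down the queue pairs underneath spdk_nvme_perf; the interleaved "starting I/O failed: -6" is presumably -ENXIO (errno 6) from the submit path once a qpair is gone. To confirm the constant from an SPDK checkout:

  grep -n 'SPDK_NVME_SC_ABORTED_SQ_DELETION' include/spdk/nvme_spec.h   # defined as 0x8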
00:07:05.948 [2024-10-30 12:18:38.377083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de14a0 is same with the state(6) to be set
00:07:05.948 [a further run of Read/Write completions with (sct=0, sc=8), omitted]
00:07:05.948 [2024-10-30 12:18:38.377289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de1860 is same with the state(6) to be set
00:07:05.948 [a further run of Read/Write completions with (sct=0, sc=8), omitted]
00:07:05.948 [2024-10-30 12:18:38.377486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de12c0 is same with the state(6) to be set
00:07:05.948 Initializing NVMe Controllers
00:07:05.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:05.948 Controller IO queue size 128, less than required.
00:07:05.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:05.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:05.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:05.948 Initialization complete. Launching workers.
00:07:05.948 ========================================================
00:07:05.948 Latency(us)
00:07:05.948 Device Information : IOPS MiB/s Average min max
00:07:05.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.68 0.09 961981.86 783.30 1044549.37
00:07:05.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.83 0.08 875830.63 392.55 1013174.77
00:07:05.948 ========================================================
00:07:05.948 Total : 332.51 0.16 921606.50 392.55 1044549.37
00:07:05.948
00:07:05.948 [2024-10-30 12:18:38.378355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de29a0 (9): Bad file descriptor
00:07:05.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:05.948 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:05.948 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:05.948 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 507725
00:07:05.948 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 507725
00:07:06.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (507725) - No such process
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 507725
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 507725
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 507725
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
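What the trace is doing here is confirming that the perf process died with the subsystem: delete_subsystem.sh probes the pid with kill -0 every half second, and once the probe fails it asserts via the NOT wrapper that wait also reports the pid gone. A minimal standalone sketch of that polling pattern (reconstructed from the xtrace; the failure branch is an assumption, not the verbatim script):

  perf_pid=507725                      # pid of the spdk_nvme_perf run, from the trace
  delay=0
  # kill -0 sends no signal; it only tests whether the process still exists.
  while kill -0 "$perf_pid" 2>/dev/null; do
      if (( delay++ > 30 )); then
          echo "perf pid $perf_pid still alive after the grace period" >&2
          exit 1                       # assumed failure branch
      fi
      sleep 0.5
  done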
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.207 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:06.466 [2024-10-30 12:18:38.896030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=508133
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:06.466 12:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:06.466 [2024-10-30 12:18:38.962920] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
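rpc_cmd in the trace is autotest's thin wrapper around SPDK's JSON-RPC client, so the fixture that the new perf run (pid 508133) is now exercising can be rebuilt by hand with scripts/rpc.py using the same arguments (socket defaults to /var/tmp/spdk.sock; run from the spdk checkout):

  # -a allows any host, -s sets the serial number, -m caps the namespace count
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # TCP listener on the target-namespace address used throughout this run
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # attach the Delay0 bdev as a namespace
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

As for the perf invocation itself: -c 0xC pins the workers to cores 2 and 3 (matching the "with lcore 2/3" associations above), -t 3 runs for three seconds, -q 128 sets the queue depth, -w randrw -M 70 asks for a 70% read mix, and -o 512 issues 512-byte I/O.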
00:07:07.031 12:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:07.031 12:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:07.031 12:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:07.288 12:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:07.288 12:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:07.288 12:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:07.852 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:07.852 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:07.852 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:08.416 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:08.416 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:08.416 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:08.983 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:08.983 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:08.983 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:09.548 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:09.548 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:09.548 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:09.548 Initializing NVMe Controllers
00:07:09.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:09.548 Controller IO queue size 128, less than required.
00:07:09.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:09.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:09.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:09.549 Initialization complete. Launching workers.
00:07:09.549 ========================================================
00:07:09.549 Latency(us)
00:07:09.549 Device Information : IOPS MiB/s Average min max
00:07:09.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004332.01 1000179.26 1012896.82
00:07:09.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004415.07 1000145.02 1013257.36
00:07:09.549 ========================================================
00:07:09.549 Total : 256.00 0.12 1004373.54 1000145.02 1013257.36
00:07:09.549
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 508133
00:07:09.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (508133) - No such process
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 508133
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:09.806 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:09.807 rmmod nvme_tcp
00:07:09.807 rmmod nvme_fabrics
00:07:09.807 rmmod nvme_keyring
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 507703 ']'
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 507703
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 507703 ']'
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 507703
00:07:09.807 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 507703
00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo
']' 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 507703' 00:07:10.065 killing process with pid 507703 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 507703 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 507703 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.065 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:12.601 00:07:12.601 real 0m12.208s 00:07:12.601 user 0m27.573s 00:07:12.601 sys 0m2.779s 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.601 ************************************ 00:07:12.601 END TEST nvmf_delete_subsystem 00:07:12.601 ************************************ 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.601 ************************************ 00:07:12.601 START TEST nvmf_host_management 00:07:12.601 ************************************ 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:12.601 * Looking for test storage... 
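Stripped of the xtrace framing, the nvmftestfini/nvmfcleanup teardown that just closed nvmf_delete_subsystem reduces to a short command sequence. A sketch of the effective commands (pids and interface names taken from the trace; the netns deletion is the assumed body of _remove_spdk_ns, which the log elides):

  sync
  modprobe -v -r nvme-tcp            # unloads nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod output
  modprobe -v -r nvme-fabrics
  kill 507703                        # killprocess: terminate the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the suite tagged
  ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1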
00:07:12.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.601 --rc genhtml_branch_coverage=1 00:07:12.601 --rc genhtml_function_coverage=1 00:07:12.601 --rc genhtml_legend=1 00:07:12.601 --rc geninfo_all_blocks=1 00:07:12.601 --rc geninfo_unexecuted_blocks=1 00:07:12.601 00:07:12.601 ' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.601 --rc genhtml_branch_coverage=1 00:07:12.601 --rc genhtml_function_coverage=1 00:07:12.601 --rc genhtml_legend=1 00:07:12.601 --rc geninfo_all_blocks=1 00:07:12.601 --rc geninfo_unexecuted_blocks=1 00:07:12.601 00:07:12.601 ' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.601 --rc genhtml_branch_coverage=1 00:07:12.601 --rc genhtml_function_coverage=1 00:07:12.601 --rc genhtml_legend=1 00:07:12.601 --rc geninfo_all_blocks=1 00:07:12.601 --rc geninfo_unexecuted_blocks=1 00:07:12.601 00:07:12.601 ' 00:07:12.601 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:12.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.602 --rc genhtml_branch_coverage=1 00:07:12.602 --rc genhtml_function_coverage=1 00:07:12.602 --rc genhtml_legend=1 00:07:12.602 --rc geninfo_all_blocks=1 00:07:12.602 --rc geninfo_unexecuted_blocks=1 00:07:12.602 00:07:12.602 ' 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:12.602 [paths/export.sh@3 and @4: two further PATH assignments that re-prepend the same /opt/go, /opt/golangci and /opt/protoc directories onto the already-duplicated value; omitted]
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH, identical to the duplicated value above; omitted]
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:07:12.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.602 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:14.507 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:14.507 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:14.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.507 12:18:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:14.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:14.507 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:14.508 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:14.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:14.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms
00:07:14.766
00:07:14.766 --- 10.0.0.2 ping statistics ---
00:07:14.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:14.766 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:14.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:14.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms
00:07:14.766
00:07:14.766 --- 10.0.0.1 ping statistics ---
00:07:14.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:14.766 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=510606
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 510606
00:07:14.766 12:18:47
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 510606 ']' 00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.766 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.767 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.767 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.767 [2024-10-30 12:18:47.300545] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:07:14.767 [2024-10-30 12:18:47.300632] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.767 [2024-10-30 12:18:47.372896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.767 [2024-10-30 12:18:47.429505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.767 [2024-10-30 12:18:47.429578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.767 [2024-10-30 12:18:47.429601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.767 [2024-10-30 12:18:47.429611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.767 [2024-10-30 12:18:47.429620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
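For reference, the namespace topology that nvmf_tcp_init traced above can be reproduced by hand. A minimal sketch using the same names and addresses the log reports (cvl_0_0/cvl_0_1 are this rig's E810 ports as renamed by the test framework; they will differ on other hardware):

    sudo ip netns add cvl_0_0_ns_spdk                    # private namespace for the target
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-side port moves into it
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator stays in the root namespace
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> target ns reachability
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The -m 0x1E mask pins the target to cores 1-4, which is exactly the four "Reactor started" notices that follow; core 0 is left free for the bdevperf initiator, which the trace later launches with -c 0x1.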
00:07:14.767 [2024-10-30 12:18:47.431162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.767 [2024-10-30 12:18:47.431287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.767 [2024-10-30 12:18:47.431365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.767 [2024-10-30 12:18:47.431368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.025 [2024-10-30 12:18:47.577698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.025 Malloc0 00:07:15.025 [2024-10-30 12:18:47.657197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=510658 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 510658 /var/tmp/bdevperf.sock 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 510658 ']' 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:15.025 { 00:07:15.025 "params": { 00:07:15.025 "name": "Nvme$subsystem", 00:07:15.025 "trtype": "$TEST_TRANSPORT", 00:07:15.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:15.025 "adrfam": "ipv4", 00:07:15.025 "trsvcid": "$NVMF_PORT", 00:07:15.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.025 "hdgst": ${hdgst:-false}, 00:07:15.025 "ddgst": ${ddgst:-false} 00:07:15.025 }, 00:07:15.025 "method": "bdev_nvme_attach_controller" 00:07:15.025 } 00:07:15.025 EOF 00:07:15.025 )") 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:15.025 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:15.025 "params": { 00:07:15.025 "name": "Nvme0", 00:07:15.025 "trtype": "tcp", 00:07:15.025 "traddr": "10.0.0.2", 00:07:15.025 "adrfam": "ipv4", 00:07:15.025 "trsvcid": "4420", 00:07:15.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.025 "hdgst": false, 00:07:15.025 "ddgst": false 00:07:15.025 }, 00:07:15.025 "method": "bdev_nvme_attach_controller" 00:07:15.025 }' 00:07:15.283 [2024-10-30 12:18:47.740331] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
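The rpcs.txt batch assembled at host_management.sh@23 is cat'd into rpc_cmd but never echoed, so its exact contents are not in the trace. A plausible reconstruction from what the target reports (the Malloc0 bdev, the 10.0.0.2:4420 listener, and the host0 NQN that is later removed and re-added); the malloc size/block size and serial number here are assumptions:

    rpc_cmd <<'RPC'    # rpc_cmd: the framework's batching wrapper around scripts/rpc.py
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    RPC

Leaving allow-any-host off matters for this test: the nvmf_subsystem_remove_host call at @84 relies on host authorization to force the initiator's disconnect and controller reset mid-run.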
00:07:15.283 [2024-10-30 12:18:47.740427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510658 ] 00:07:15.283 [2024-10-30 12:18:47.809589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.283 [2024-10-30 12:18:47.868960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.541 Running I/O for 10 seconds... 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:15.799 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:16.059 
12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.059 [2024-10-30 12:18:48.600427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf0f10 is same with the state(6) to be set 00:07:16.059 [2024-10-30 12:18:48.603785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.059 [2024-10-30 12:18:48.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.603853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.059 [2024-10-30 12:18:48.603868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.603881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.059 [2024-10-30 12:18:48.603894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.603908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:16.059 [2024-10-30 12:18:48.603921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.603934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9a40 is same with the state(6) to be set 00:07:16.059 12:18:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.059 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.059 [2024-10-30 12:18:48.609130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 
12:18:48.609443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 
12:18:48.609760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.059 [2024-10-30 12:18:48.609887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.059 [2024-10-30 12:18:48.609900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.609915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.609928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.609943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.609956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.609984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.609998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 
12:18:48.610039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 
12:18:48.610353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 
12:18:48.610670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 12:18:48.610927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.060 [2024-10-30 12:18:48.610942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.060 [2024-10-30 
12:18:48.610955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.061 [2024-10-30 12:18:48.610970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.061 [2024-10-30 12:18:48.610983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.061 [2024-10-30 12:18:48.610998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.061 [2024-10-30 12:18:48.611012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.061 [2024-10-30 12:18:48.611027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.061 [2024-10-30 12:18:48.611041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.061 [2024-10-30 12:18:48.611055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.061 [2024-10-30 12:18:48.611068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.061 [2024-10-30 12:18:48.611083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.061 [2024-10-30 12:18:48.611097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.061 [2024-10-30 12:18:48.612313] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:16.061 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.061 12:18:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:16.061 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:16.061 00:07:16.061 Latency(us) 00:07:16.061 [2024-10-30T11:18:48.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.061 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:16.061 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:16.061 Verification LBA range: start 0x0 length 0x400 00:07:16.061 Nvme0n1 : 0.40 1605.08 100.32 160.51 0.00 35184.70 2415.12 34758.35 00:07:16.061 [2024-10-30T11:18:48.742Z] =================================================================================================================== 00:07:16.061 [2024-10-30T11:18:48.742Z] Total : 1605.08 100.32 160.51 0.00 35184.70 2415.12 34758.35 00:07:16.061 [2024-10-30 12:18:48.614175] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.061 [2024-10-30 12:18:48.614206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9a40 (9): Bad file descriptor 00:07:16.061 [2024-10-30 12:18:48.663448] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
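The flood of "ABORTED - SQ DELETION" completions and the failed job above are the intended effect of yanking host0 while I/O is in flight. The step was gated by waitforio, whose loop can be read straight off the @54/@55/@58/@62 trace lines (first poll saw 67 read ops, below the threshold; after sleep 0.25 the second saw 579 and broke out). A sketch, assuming the framework's rpc_cmd wrapper is in scope:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do              # up to ten polls
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then    # enough verified reads observed
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }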
00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 510658 00:07:16.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (510658) - No such process 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:16.996 { 00:07:16.996 "params": { 00:07:16.996 "name": "Nvme$subsystem", 00:07:16.996 "trtype": "$TEST_TRANSPORT", 00:07:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:16.996 "adrfam": "ipv4", 00:07:16.996 "trsvcid": "$NVMF_PORT", 00:07:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:16.996 "hdgst": ${hdgst:-false}, 00:07:16.996 "ddgst": ${ddgst:-false} 00:07:16.996 }, 00:07:16.996 "method": "bdev_nvme_attach_controller" 00:07:16.996 } 00:07:16.996 EOF 00:07:16.996 )") 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:16.996 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:16.996 "params": { 00:07:16.996 "name": "Nvme0", 00:07:16.996 "trtype": "tcp", 00:07:16.996 "traddr": "10.0.0.2", 00:07:16.996 "adrfam": "ipv4", 00:07:16.996 "trsvcid": "4420", 00:07:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:16.996 "hdgst": false, 00:07:16.996 "ddgst": false 00:07:16.996 }, 00:07:16.996 "method": "bdev_nvme_attach_controller" 00:07:16.996 }' 00:07:16.996 [2024-10-30 12:18:49.664280] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:07:16.996 [2024-10-30 12:18:49.664384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510930 ] 00:07:17.253 [2024-10-30 12:18:49.733657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.253 [2024-10-30 12:18:49.792404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.510 Running I/O for 1 seconds... 
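Both bdevperf runs take their controller configuration as JSON on a process-substitution fd (/dev/fd/63 for the first run, /dev/fd/62 here) rather than a temp file. Reduced to essentials, with the outer config shape assumed from SPDK's JSON-config format (the trace only prints the per-controller fragment) and paths assuming the SPDK repo root:

    gen_json() {    # stand-in for gen_nvmf_target_json 0
        cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    }
    ./build/examples/bdevperf --json <(gen_json) -q 64 -o 65536 -w verify -t 1

Feeding the config through a process substitution keeps per-run state out of the workspace, which is why the trace shows only /dev/fd paths and why the later cleanup has no config file to delete.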
00:07:18.884 1536.00 IOPS, 96.00 MiB/s 00:07:18.884 Latency(us) 00:07:18.884 [2024-10-30T11:18:51.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:18.884 Verification LBA range: start 0x0 length 0x400 00:07:18.884 Nvme0n1 : 1.01 1580.67 98.79 0.00 0.00 39841.24 5679.79 36311.80 00:07:18.884 [2024-10-30T11:18:51.565Z] =================================================================================================================== 00:07:18.884 [2024-10-30T11:18:51.565Z] Total : 1580.67 98.79 0.00 0.00 39841.24 5679.79 36311.80 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.884 rmmod nvme_tcp 00:07:18.884 rmmod nvme_fabrics 00:07:18.884 rmmod nvme_keyring 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 510606 ']' 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 510606 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 510606 ']' 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 510606 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 510606 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 510606' 00:07:18.884 killing process with pid 510606 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 510606 00:07:18.884 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 510606 00:07:19.144 [2024-10-30 12:18:51.707245] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.144 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:21.681 00:07:21.681 real 0m8.942s 00:07:21.681 user 0m20.369s 00:07:21.681 sys 0m2.725s 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.681 ************************************ 00:07:21.681 END TEST nvmf_host_management 00:07:21.681 ************************************ 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.681 ************************************ 00:07:21.681 START TEST nvmf_lvol 00:07:21.681 ************************************ 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:21.681 * Looking for test storage... 00:07:21.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:21.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.681 --rc genhtml_branch_coverage=1 00:07:21.681 --rc genhtml_function_coverage=1 00:07:21.681 --rc genhtml_legend=1 00:07:21.681 --rc geninfo_all_blocks=1 00:07:21.681 --rc geninfo_unexecuted_blocks=1 00:07:21.681 00:07:21.681 ' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:21.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.681 --rc genhtml_branch_coverage=1 00:07:21.681 --rc genhtml_function_coverage=1 00:07:21.681 --rc genhtml_legend=1 00:07:21.681 --rc geninfo_all_blocks=1 00:07:21.681 --rc geninfo_unexecuted_blocks=1 00:07:21.681 00:07:21.681 ' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:21.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.681 --rc genhtml_branch_coverage=1 00:07:21.681 --rc genhtml_function_coverage=1 00:07:21.681 --rc genhtml_legend=1 00:07:21.681 --rc geninfo_all_blocks=1 00:07:21.681 --rc geninfo_unexecuted_blocks=1 00:07:21.681 00:07:21.681 ' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:21.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.681 --rc genhtml_branch_coverage=1 00:07:21.681 --rc genhtml_function_coverage=1 00:07:21.681 --rc genhtml_legend=1 00:07:21.681 --rc geninfo_all_blocks=1 00:07:21.681 --rc geninfo_unexecuted_blocks=1 00:07:21.681 00:07:21.681 ' 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
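The lcov probe just traced walks the version comparator in scripts/common.sh. Reassembled from the @333-@368 trace as a sketch; the decimal helper's non-numeric branch and the full operator table are abbreviated to the "<" path this probe exercises:

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # non-numeric fields as 0 (assumption)
    }
    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        local lt=0 gt=0 eq=1
        IFS=.-: read -ra ver1 <<< "$1"                # "1.15" -> (1 15); splits on . - :
        IFS=.-: read -ra ver2 <<< "$3"                # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            if ((ver1[v] > ver2[v])); then gt=1 eq=0; break; fi
            if ((ver1[v] < ver2[v])); then lt=1 eq=0; break; fi
        done
        case "$op" in
            "<") ((lt == 1)) ;;
            ">") ((gt == 1)) ;;
            "==") ((eq == 1)) ;;
        esac
    }
    lt() { cmp_versions "$1" "<" "$2"; }

As in the trace, lt 1.15 2 compares field-by-field (1 < 2 on the first field) and returns true, so the modern lcov option set is selected for coverage collection.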
00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.681 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.682 12:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.682 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:23.588 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:23.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:23.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:23.589 12:18:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:23.589 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:23.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.589 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:23.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:07:23.848 00:07:23.848 --- 10.0.0.2 ping statistics --- 00:07:23.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.848 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:07:23.848 00:07:23.848 --- 10.0.0.1 ping statistics --- 00:07:23.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.848 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:23.848 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=513146 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 513146 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 513146 ']' 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.849 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.849 [2024-10-30 12:18:56.415213] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
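[editor's aside] The wiring nvmftestinit traced out above is the crux of the phy TCP setup: one port of the e810 pair (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so traffic crosses a real link even on a single host. Condensed from the trace, with device and namespace names exactly as in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port -> private ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator
  # The target app itself then runs inside the namespace (as traced above):
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7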
00:07:23.849 [2024-10-30 12:18:56.415322] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.849 [2024-10-30 12:18:56.488750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.107 [2024-10-30 12:18:56.549295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.107 [2024-10-30 12:18:56.549351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.107 [2024-10-30 12:18:56.549364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.107 [2024-10-30 12:18:56.549375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.107 [2024-10-30 12:18:56.549390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.107 [2024-10-30 12:18:56.550824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.107 [2024-10-30 12:18:56.550853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.107 [2024-10-30 12:18:56.550857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.107 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:24.365 [2024-10-30 12:18:56.933829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.365 12:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:24.623 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:24.623 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:24.881 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:24.881 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:25.139 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:25.704 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8047a342-d7a2-4c1d-b040-f7d03e3937d1 00:07:25.704 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8047a342-d7a2-4c1d-b040-f7d03e3937d1 lvol 20 00:07:25.704 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bedbcb1e-b8f2-49fa-b0aa-a106701db021 00:07:25.704 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.962 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bedbcb1e-b8f2-49fa-b0aa-a106701db021 00:07:26.534 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:26.535 [2024-10-30 12:18:59.142220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.535 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.838 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=513458 00:07:26.838 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:26.838 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:27.845 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bedbcb1e-b8f2-49fa-b0aa-a106701db021 MY_SNAPSHOT 00:07:28.103 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8abc7a2c-833b-49da-948b-59fe34134ade 00:07:28.103 12:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bedbcb1e-b8f2-49fa-b0aa-a106701db021 30 00:07:28.669 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8abc7a2c-833b-49da-948b-59fe34134ade MY_CLONE 00:07:28.927 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=86d1f13b-b9dc-4f21-a2f0-3235c1f53d32 00:07:28.927 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 86d1f13b-b9dc-4f21-a2f0-3235c1f53d32 00:07:29.492 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 513458 00:07:37.602 Initializing NVMe Controllers 00:07:37.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.602 Controller IO queue size 128, less than required. 00:07:37.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
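[editor's aside] Before the perf numbers land below: the RPC sequence just traced reduces to a short script. RPC names, sizes and the NQN are taken verbatim from the log; the lvs/lvol/snap/clone shell variables are illustrative stand-ins for the $(...) captures in nvmf_lvol.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Two 64 MiB / 512 B-block malloc bdevs striped into raid0, lvstore on top:
  $rpc bdev_malloc_create 64 512            # -> Malloc0
  $rpc bdev_malloc_create 64 512            # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume
  # Export it over NVMe/TCP; spdk_nvme_perf (launched above) writes to it
  # while the lvol operations run:
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current blocks
  $rpc bdev_lvol_resize "$lvol" 30                      # grow live volume to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                       # copy-up: detach clone from snapshot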
00:07:37.602  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:37.602  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:37.602  Initialization complete. Launching workers.
00:07:37.602 ========================================================
00:07:37.602                                                                             Latency(us)
00:07:37.602 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:37.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10571.90      41.30   12109.24    1782.22   58196.17
00:07:37.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10496.50      41.00   12202.63    2220.48   53494.81
00:07:37.602 ========================================================
00:07:37.602 Total                                                                   :   21068.40      82.30   12155.77    1782.22   58196.17
00:07:37.602
00:07:37.602 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.602 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bedbcb1e-b8f2-49fa-b0aa-a106701db021 00:07:37.859 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8047a342-d7a2-4c1d-b040-f7d03e3937d1 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.117 rmmod nvme_tcp 00:07:38.117 rmmod nvme_fabrics 00:07:38.117 rmmod nvme_keyring 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 513146 ']' 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 513146 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 513146 ']' 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 513146 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 513146 00:07:38.117 12:19:10
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 513146' 00:07:38.117 killing process with pid 513146 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 513146 00:07:38.117 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 513146 00:07:38.375 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.375 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.375 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.375 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.376 12:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.917 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.917 00:07:40.917 real 0m19.241s 00:07:40.917 user 1m5.576s 00:07:40.917 sys 0m5.479s 00:07:40.917 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:40.918 ************************************ 00:07:40.918 END TEST nvmf_lvol 00:07:40.918 ************************************ 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.918 ************************************ 00:07:40.918 START TEST nvmf_lvs_grow 00:07:40.918 ************************************ 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:40.918 * Looking for test storage... 
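[editor's aside] One genuine wart shows up in both test preambles (the second occurrence follows just below): "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from an unset toggle reaching '[' '' -eq 1 ']'; an empty string is not an integer, so [ complains, harmlessly, and the branch is skipped. A hedged one-line fix, with a stand-in variable name since the log does not show which toggle is empty:

  # Default the toggle before the numeric test so '[' never sees an empty string.
  # SOME_TEST_TOGGLE is a placeholder; the real variable is whatever
  # test/nvmf/common.sh line 33 reads.
  if [ "${SOME_TEST_TOGGLE:-0}" -eq 1 ]; then
    echo "feature enabled"
  fi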
00:07:40.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:40.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.918 --rc genhtml_branch_coverage=1 00:07:40.918 --rc genhtml_function_coverage=1 00:07:40.918 --rc genhtml_legend=1 00:07:40.918 --rc geninfo_all_blocks=1 00:07:40.918 --rc geninfo_unexecuted_blocks=1 00:07:40.918 00:07:40.918 ' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:40.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.918 --rc genhtml_branch_coverage=1 00:07:40.918 --rc genhtml_function_coverage=1 00:07:40.918 --rc genhtml_legend=1 00:07:40.918 --rc geninfo_all_blocks=1 00:07:40.918 --rc geninfo_unexecuted_blocks=1 00:07:40.918 00:07:40.918 ' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:40.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.918 --rc genhtml_branch_coverage=1 00:07:40.918 --rc genhtml_function_coverage=1 00:07:40.918 --rc genhtml_legend=1 00:07:40.918 --rc geninfo_all_blocks=1 00:07:40.918 --rc geninfo_unexecuted_blocks=1 00:07:40.918 00:07:40.918 ' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:40.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.918 --rc genhtml_branch_coverage=1 00:07:40.918 --rc genhtml_function_coverage=1 00:07:40.918 --rc genhtml_legend=1 00:07:40.918 --rc geninfo_all_blocks=1 00:07:40.918 --rc geninfo_unexecuted_blocks=1 00:07:40.918 00:07:40.918 ' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:40.918 12:19:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.918 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.919 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.823 12:19:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:42.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:42.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:07:42.823 00:07:42.823 --- 10.0.0.2 ping statistics --- 00:07:42.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.823 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:07:42.823 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
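Condensed, the bring-up the trace just performed: the target-side port cvl_0_0 moves into a private namespace and takes 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are ping-verified (the ipts helper seen above is the same iptables call plus a bookkeeping comment; the addr-flush steps are omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns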
00:07:42.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:42.823 00:07:42.823 --- 10.0.0.1 ping statistics --- 00:07:42.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.823 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=516862 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 516862 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 516862 ']' 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.824 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.824 [2024-10-30 12:19:15.495498] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:07:42.824 [2024-10-30 12:19:15.495601] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.083 [2024-10-30 12:19:15.569491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.083 [2024-10-30 12:19:15.627394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.083 [2024-10-30 12:19:15.627458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.083 [2024-10-30 12:19:15.627471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.083 [2024-10-30 12:19:15.627482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.083 [2024-10-30 12:19:15.627491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.083 [2024-10-30 12:19:15.628100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.083 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.083 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:43.083 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:43.083 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:43.083 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.341 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.341 12:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:43.341 [2024-10-30 12:19:16.011474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.599 ************************************ 00:07:43.599 START TEST lvs_grow_clean 00:07:43.599 ************************************ 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:43.599 12:19:16 
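Stripped of the harness wrappers, the target start and transport setup traced above reduce to two steps: launch nvmf_tgt inside the target namespace pinned to core 0 (-m 0x1, hence the single "Reactor started on core 0" notice), then create the TCP transport over JSON-RPC with the options recorded in the trace:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!          # 516862 in this run
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192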
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:43.599 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.857 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:43.858 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:44.116 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:44.116 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:44.116 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:44.374 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:44.374 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:44.374 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 57c928da-ef51-475f-85a9-cf9b86c7e60d lvol 150 00:07:44.633 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b6ea5e95-6b65-4356-b0af-bff345bedaaf 00:07:44.633 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.633 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:44.892 [2024-10-30 12:19:17.429670] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:44.892 [2024-10-30 12:19:17.429748] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:44.892 true 00:07:44.892 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:44.892 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:45.151 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:45.151 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:45.410 12:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6ea5e95-6b65-4356-b0af-bff345bedaaf 00:07:45.668 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:45.925 [2024-10-30 12:19:18.525035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.926 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=517302 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 517302 /var/tmp/bdevperf.sock 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 517302 ']' 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:46.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.184 12:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:46.184 [2024-10-30 12:19:18.855601] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
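In order, the lvs_grow_clean provisioning that just scrolled past: a 200 MiB file is exposed as a 4 KiB-block AIO bdev, an lvstore with 4 MiB clusters is created on it, and the reported total_data_clusters of 49 matches 200 MiB / 4 MiB = 50 clusters with, on my reading (the trace does not say so explicitly), one cluster consumed by lvstore metadata. A 150 MiB lvol is then carved out, the backing file is pre-grown to 400 MiB and rescanned (the grow of the lvstore itself comes later, mid-workload), and the lvol is exported as namespace 1 of cnode0. With paths trimmed to rpc.py and $lvs and $lvol standing in for the UUIDs 57c928da-... and b6ea5e95-... above:

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u "$lvs" lvol 150
  truncate -s 400M aio_file && rpc.py bdev_aio_rescan aio_bdev
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420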
00:07:46.184 [2024-10-30 12:19:18.855671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517302 ] 00:07:46.442 [2024-10-30 12:19:18.920350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.442 [2024-10-30 12:19:18.976523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.442 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.442 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:46.442 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:47.007 Nvme0n1 00:07:47.007 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:47.265 [ 00:07:47.265 { 00:07:47.265 "name": "Nvme0n1", 00:07:47.265 "aliases": [ 00:07:47.265 "b6ea5e95-6b65-4356-b0af-bff345bedaaf" 00:07:47.265 ], 00:07:47.265 "product_name": "NVMe disk", 00:07:47.265 "block_size": 4096, 00:07:47.265 "num_blocks": 38912, 00:07:47.265 "uuid": "b6ea5e95-6b65-4356-b0af-bff345bedaaf", 00:07:47.265 "numa_id": 0, 00:07:47.265 "assigned_rate_limits": { 00:07:47.265 "rw_ios_per_sec": 0, 00:07:47.265 "rw_mbytes_per_sec": 0, 00:07:47.265 "r_mbytes_per_sec": 0, 00:07:47.265 "w_mbytes_per_sec": 0 00:07:47.265 }, 00:07:47.265 "claimed": false, 00:07:47.265 "zoned": false, 00:07:47.265 "supported_io_types": { 00:07:47.265 "read": true, 00:07:47.265 "write": true, 00:07:47.265 "unmap": true, 00:07:47.265 "flush": true, 00:07:47.265 "reset": true, 00:07:47.265 "nvme_admin": true, 00:07:47.265 "nvme_io": true, 00:07:47.265 "nvme_io_md": false, 00:07:47.265 "write_zeroes": true, 00:07:47.265 "zcopy": false, 00:07:47.265 "get_zone_info": false, 00:07:47.265 "zone_management": false, 00:07:47.265 "zone_append": false, 00:07:47.265 "compare": true, 00:07:47.265 "compare_and_write": true, 00:07:47.265 "abort": true, 00:07:47.265 "seek_hole": false, 00:07:47.265 "seek_data": false, 00:07:47.265 "copy": true, 00:07:47.265 "nvme_iov_md": false 00:07:47.265 }, 00:07:47.265 "memory_domains": [ 00:07:47.265 { 00:07:47.265 "dma_device_id": "system", 00:07:47.265 "dma_device_type": 1 00:07:47.265 } 00:07:47.265 ], 00:07:47.265 "driver_specific": { 00:07:47.265 "nvme": [ 00:07:47.265 { 00:07:47.265 "trid": { 00:07:47.265 "trtype": "TCP", 00:07:47.265 "adrfam": "IPv4", 00:07:47.265 "traddr": "10.0.0.2", 00:07:47.265 "trsvcid": "4420", 00:07:47.265 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:47.265 }, 00:07:47.265 "ctrlr_data": { 00:07:47.265 "cntlid": 1, 00:07:47.265 "vendor_id": "0x8086", 00:07:47.265 "model_number": "SPDK bdev Controller", 00:07:47.265 "serial_number": "SPDK0", 00:07:47.265 "firmware_revision": "25.01", 00:07:47.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.265 "oacs": { 00:07:47.265 "security": 0, 00:07:47.265 "format": 0, 00:07:47.265 "firmware": 0, 00:07:47.265 "ns_manage": 0 00:07:47.265 }, 00:07:47.265 "multi_ctrlr": true, 00:07:47.265 
"ana_reporting": false 00:07:47.265 }, 00:07:47.265 "vs": { 00:07:47.265 "nvme_version": "1.3" 00:07:47.265 }, 00:07:47.265 "ns_data": { 00:07:47.265 "id": 1, 00:07:47.265 "can_share": true 00:07:47.265 } 00:07:47.265 } 00:07:47.265 ], 00:07:47.265 "mp_policy": "active_passive" 00:07:47.265 } 00:07:47.265 } 00:07:47.265 ] 00:07:47.265 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=517324 00:07:47.265 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:47.265 12:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:47.265 Running I/O for 10 seconds... 00:07:48.639 Latency(us) 00:07:48.639 [2024-10-30T11:19:21.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.639 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:48.639 [2024-10-30T11:19:21.320Z] =================================================================================================================== 00:07:48.639 [2024-10-30T11:19:21.320Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:48.639 00:07:49.203 12:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:49.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.461 Nvme0n1 : 2.00 15211.00 59.42 0.00 0.00 0.00 0.00 0.00 00:07:49.461 [2024-10-30T11:19:22.142Z] =================================================================================================================== 00:07:49.461 [2024-10-30T11:19:22.142Z] Total : 15211.00 59.42 0.00 0.00 0.00 0.00 0.00 00:07:49.461 00:07:49.461 true 00:07:49.461 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:49.461 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:49.718 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:49.718 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:49.718 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 517324 00:07:50.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.283 Nvme0n1 : 3.00 15327.00 59.87 0.00 0.00 0.00 0.00 0.00 00:07:50.283 [2024-10-30T11:19:22.964Z] =================================================================================================================== 00:07:50.283 [2024-10-30T11:19:22.964Z] Total : 15327.00 59.87 0.00 0.00 0.00 0.00 0.00 00:07:50.283 00:07:51.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.671 Nvme0n1 : 4.00 15400.50 60.16 0.00 0.00 0.00 0.00 0.00 00:07:51.671 [2024-10-30T11:19:24.352Z] 
=================================================================================================================== 00:07:51.671 [2024-10-30T11:19:24.352Z] Total : 15400.50 60.16 0.00 0.00 0.00 0.00 0.00 00:07:51.671 00:07:52.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.603 Nvme0n1 : 5.00 15495.40 60.53 0.00 0.00 0.00 0.00 0.00 00:07:52.603 [2024-10-30T11:19:25.284Z] =================================================================================================================== 00:07:52.603 [2024-10-30T11:19:25.284Z] Total : 15495.40 60.53 0.00 0.00 0.00 0.00 0.00 00:07:52.603 00:07:53.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.537 Nvme0n1 : 6.00 15548.33 60.74 0.00 0.00 0.00 0.00 0.00 00:07:53.537 [2024-10-30T11:19:26.218Z] =================================================================================================================== 00:07:53.537 [2024-10-30T11:19:26.218Z] Total : 15548.33 60.74 0.00 0.00 0.00 0.00 0.00 00:07:53.537 00:07:54.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.471 Nvme0n1 : 7.00 15595.00 60.92 0.00 0.00 0.00 0.00 0.00 00:07:54.471 [2024-10-30T11:19:27.152Z] =================================================================================================================== 00:07:54.471 [2024-10-30T11:19:27.152Z] Total : 15595.00 60.92 0.00 0.00 0.00 0.00 0.00 00:07:54.471 00:07:55.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.404 Nvme0n1 : 8.00 15645.88 61.12 0.00 0.00 0.00 0.00 0.00 00:07:55.404 [2024-10-30T11:19:28.085Z] =================================================================================================================== 00:07:55.404 [2024-10-30T11:19:28.085Z] Total : 15645.88 61.12 0.00 0.00 0.00 0.00 0.00 00:07:55.404 00:07:56.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.338 Nvme0n1 : 9.00 15668.22 61.20 0.00 0.00 0.00 0.00 0.00 00:07:56.338 [2024-10-30T11:19:29.019Z] =================================================================================================================== 00:07:56.338 [2024-10-30T11:19:29.019Z] Total : 15668.22 61.20 0.00 0.00 0.00 0.00 0.00 00:07:56.338 00:07:57.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.271 Nvme0n1 : 10.00 15708.10 61.36 0.00 0.00 0.00 0.00 0.00 00:07:57.271 [2024-10-30T11:19:29.952Z] =================================================================================================================== 00:07:57.271 [2024-10-30T11:19:29.952Z] Total : 15708.10 61.36 0.00 0.00 0.00 0.00 0.00 00:07:57.271 00:07:57.271 00:07:57.271 Latency(us) 00:07:57.271 [2024-10-30T11:19:29.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.271 Nvme0n1 : 10.01 15706.29 61.35 0.00 0.00 8144.00 4296.25 16990.81 00:07:57.271 [2024-10-30T11:19:29.952Z] =================================================================================================================== 00:07:57.271 [2024-10-30T11:19:29.952Z] Total : 15706.29 61.35 0.00 0.00 8144.00 4296.25 16990.81 00:07:57.271 { 00:07:57.271 "results": [ 00:07:57.271 { 00:07:57.271 "job": "Nvme0n1", 00:07:57.271 "core_mask": "0x2", 00:07:57.271 "workload": "randwrite", 00:07:57.271 "status": "finished", 00:07:57.271 "queue_depth": 128, 00:07:57.271 "io_size": 4096, 00:07:57.271 
"runtime": 10.005288, 00:07:57.271 "iops": 15706.294511462338, 00:07:57.271 "mibps": 61.35271293539976, 00:07:57.271 "io_failed": 0, 00:07:57.271 "io_timeout": 0, 00:07:57.271 "avg_latency_us": 8144.00167642169, 00:07:57.271 "min_latency_us": 4296.248888888889, 00:07:57.271 "max_latency_us": 16990.814814814814 00:07:57.271 } 00:07:57.271 ], 00:07:57.271 "core_count": 1 00:07:57.271 } 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 517302 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 517302 ']' 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 517302 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 517302 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 517302' 00:07:57.528 killing process with pid 517302 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 517302 00:07:57.528 Received shutdown signal, test time was about 10.000000 seconds 00:07:57.528 00:07:57.528 Latency(us) 00:07:57.528 [2024-10-30T11:19:30.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.528 [2024-10-30T11:19:30.209Z] =================================================================================================================== 00:07:57.528 [2024-10-30T11:19:30.209Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:57.528 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 517302 00:07:57.786 12:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.044 12:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:58.302 12:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:58.302 12:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:58.561 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:58.561 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:58.561 12:19:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:58.819 [2024-10-30 12:19:31.296061] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:58.819 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:59.077 request: 00:07:59.077 { 00:07:59.077 "uuid": "57c928da-ef51-475f-85a9-cf9b86c7e60d", 00:07:59.077 "method": "bdev_lvol_get_lvstores", 00:07:59.077 "req_id": 1 00:07:59.077 } 00:07:59.077 Got JSON-RPC error response 00:07:59.077 response: 00:07:59.077 { 00:07:59.077 "code": -19, 00:07:59.077 "message": "No such device" 00:07:59.077 } 00:07:59.077 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:59.077 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.077 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.077 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.077 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:59.336 aio_bdev 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b6ea5e95-6b65-4356-b0af-bff345bedaaf 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=b6ea5e95-6b65-4356-b0af-bff345bedaaf 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:59.336 12:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:59.594 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b6ea5e95-6b65-4356-b0af-bff345bedaaf -t 2000 00:07:59.852 [ 00:07:59.852 { 00:07:59.852 "name": "b6ea5e95-6b65-4356-b0af-bff345bedaaf", 00:07:59.852 "aliases": [ 00:07:59.852 "lvs/lvol" 00:07:59.852 ], 00:07:59.852 "product_name": "Logical Volume", 00:07:59.852 "block_size": 4096, 00:07:59.852 "num_blocks": 38912, 00:07:59.852 "uuid": "b6ea5e95-6b65-4356-b0af-bff345bedaaf", 00:07:59.852 "assigned_rate_limits": { 00:07:59.852 "rw_ios_per_sec": 0, 00:07:59.852 "rw_mbytes_per_sec": 0, 00:07:59.852 "r_mbytes_per_sec": 0, 00:07:59.852 "w_mbytes_per_sec": 0 00:07:59.852 }, 00:07:59.852 "claimed": false, 00:07:59.852 "zoned": false, 00:07:59.852 "supported_io_types": { 00:07:59.852 "read": true, 00:07:59.852 "write": true, 00:07:59.852 "unmap": true, 00:07:59.852 "flush": false, 00:07:59.852 "reset": true, 00:07:59.852 "nvme_admin": false, 00:07:59.852 "nvme_io": false, 00:07:59.852 "nvme_io_md": false, 00:07:59.852 "write_zeroes": true, 00:07:59.852 "zcopy": false, 00:07:59.852 "get_zone_info": false, 00:07:59.852 "zone_management": false, 00:07:59.852 "zone_append": false, 00:07:59.852 "compare": false, 00:07:59.852 "compare_and_write": false, 00:07:59.852 "abort": false, 00:07:59.852 "seek_hole": true, 00:07:59.852 "seek_data": true, 00:07:59.852 "copy": false, 00:07:59.852 "nvme_iov_md": false 00:07:59.852 }, 00:07:59.852 "driver_specific": { 00:07:59.852 "lvol": { 00:07:59.852 "lvol_store_uuid": "57c928da-ef51-475f-85a9-cf9b86c7e60d", 00:07:59.852 "base_bdev": "aio_bdev", 00:07:59.852 "thin_provision": false, 00:07:59.852 "num_allocated_clusters": 38, 00:07:59.852 "snapshot": false, 00:07:59.852 "clone": false, 00:07:59.852 "esnap_clone": false 00:07:59.852 } 00:07:59.852 } 00:07:59.852 } 00:07:59.852 ] 00:07:59.852 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:59.852 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:07:59.852 
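Two things happen in quick succession here. First, with the base AIO bdev deleted out from under it, the lvstore is gone, and the harness's NOT wrapper asserts that bdev_lvol_get_lvstores must now fail; it does, with JSON-RPC error -19, which is ENODEV ("No such device"). Outside the harness the same assertion would be roughly:

  if rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
    echo "lvstore unexpectedly still present" >&2; exit 1
  fi

Second, re-creating the AIO bdev on the same file and waiting for examine lets the lvstore reload from its on-disk metadata, which is why the lvol reappears in the bdev dump with num_allocated_clusters still 38.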
12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:00.111 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:00.111 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:08:00.111 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:00.369 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:00.369 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6ea5e95-6b65-4356-b0af-bff345bedaaf 00:08:00.627 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57c928da-ef51-475f-85a9-cf9b86c7e60d 00:08:00.886 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.145 00:08:01.145 real 0m17.727s 00:08:01.145 user 0m17.357s 00:08:01.145 sys 0m1.737s 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:01.145 ************************************ 00:08:01.145 END TEST lvs_grow_clean 00:08:01.145 ************************************ 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.145 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.403 ************************************ 00:08:01.403 START TEST lvs_grow_dirty 00:08:01.403 ************************************ 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.403 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.661 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:01.661 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:01.920 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=23e319a2-2430-4f56-beac-2cb156156d2b 00:08:01.920 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:01.920 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:02.178 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:02.178 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:02.178 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 23e319a2-2430-4f56-beac-2cb156156d2b lvol 150 00:08:02.436 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a 00:08:02.436 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.436 12:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:02.695 [2024-10-30 12:19:35.211659] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:02.695 [2024-10-30 12:19:35.211742] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:02.695 true 00:08:02.695 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:02.695 12:19:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:02.952 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:02.952 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.210 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a 00:08:03.469 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:03.727 [2024-10-30 12:19:36.290899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.727 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=519381 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 519381 /var/tmp/bdevperf.sock 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 519381 ']' 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:03.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.986 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.986 [2024-10-30 12:19:36.617176] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:08:03.986 [2024-10-30 12:19:36.617269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519381 ] 00:08:04.245 [2024-10-30 12:19:36.682020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.245 [2024-10-30 12:19:36.738695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.245 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.245 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:04.245 12:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:04.811 Nvme0n1 00:08:04.811 12:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:05.069 [ 00:08:05.069 { 00:08:05.069 "name": "Nvme0n1", 00:08:05.069 "aliases": [ 00:08:05.069 "6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a" 00:08:05.069 ], 00:08:05.069 "product_name": "NVMe disk", 00:08:05.069 "block_size": 4096, 00:08:05.069 "num_blocks": 38912, 00:08:05.069 "uuid": "6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a", 00:08:05.069 "numa_id": 0, 00:08:05.069 "assigned_rate_limits": { 00:08:05.069 "rw_ios_per_sec": 0, 00:08:05.069 "rw_mbytes_per_sec": 0, 00:08:05.069 "r_mbytes_per_sec": 0, 00:08:05.069 "w_mbytes_per_sec": 0 00:08:05.069 }, 00:08:05.069 "claimed": false, 00:08:05.069 "zoned": false, 00:08:05.069 "supported_io_types": { 00:08:05.069 "read": true, 00:08:05.069 "write": true, 00:08:05.069 "unmap": true, 00:08:05.069 "flush": true, 00:08:05.069 "reset": true, 00:08:05.069 "nvme_admin": true, 00:08:05.069 "nvme_io": true, 00:08:05.069 "nvme_io_md": false, 00:08:05.069 "write_zeroes": true, 00:08:05.069 "zcopy": false, 00:08:05.069 "get_zone_info": false, 00:08:05.069 "zone_management": false, 00:08:05.069 "zone_append": false, 00:08:05.069 "compare": true, 00:08:05.069 "compare_and_write": true, 00:08:05.069 "abort": true, 00:08:05.069 "seek_hole": false, 00:08:05.069 "seek_data": false, 00:08:05.069 "copy": true, 00:08:05.069 "nvme_iov_md": false 00:08:05.069 }, 00:08:05.069 "memory_domains": [ 00:08:05.069 { 00:08:05.069 "dma_device_id": "system", 00:08:05.069 "dma_device_type": 1 00:08:05.069 } 00:08:05.069 ], 00:08:05.069 "driver_specific": { 00:08:05.069 "nvme": [ 00:08:05.069 { 00:08:05.069 "trid": { 00:08:05.069 "trtype": "TCP", 00:08:05.069 "adrfam": "IPv4", 00:08:05.069 "traddr": "10.0.0.2", 00:08:05.069 "trsvcid": "4420", 00:08:05.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:05.069 }, 00:08:05.069 "ctrlr_data": { 00:08:05.069 "cntlid": 1, 00:08:05.069 "vendor_id": "0x8086", 00:08:05.069 "model_number": "SPDK bdev Controller", 00:08:05.069 "serial_number": "SPDK0", 00:08:05.069 "firmware_revision": "25.01", 00:08:05.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:05.069 "oacs": { 00:08:05.069 "security": 0, 00:08:05.069 "format": 0, 00:08:05.069 "firmware": 0, 00:08:05.069 "ns_manage": 0 00:08:05.069 }, 00:08:05.069 "multi_ctrlr": true, 00:08:05.069 
"ana_reporting": false 00:08:05.069 }, 00:08:05.069 "vs": { 00:08:05.069 "nvme_version": "1.3" 00:08:05.069 }, 00:08:05.069 "ns_data": { 00:08:05.069 "id": 1, 00:08:05.069 "can_share": true 00:08:05.069 } 00:08:05.069 } 00:08:05.069 ], 00:08:05.069 "mp_policy": "active_passive" 00:08:05.069 } 00:08:05.070 } 00:08:05.070 ] 00:08:05.070 12:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=519517 00:08:05.070 12:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:05.070 12:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:05.070 Running I/O for 10 seconds... 00:08:06.445 Latency(us) 00:08:06.445 [2024-10-30T11:19:39.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.445 Nvme0n1 : 1.00 15370.00 60.04 0.00 0.00 0.00 0.00 0.00 00:08:06.445 [2024-10-30T11:19:39.126Z] =================================================================================================================== 00:08:06.445 [2024-10-30T11:19:39.126Z] Total : 15370.00 60.04 0.00 0.00 0.00 0.00 0.00 00:08:06.445 00:08:07.021 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:07.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.279 Nvme0n1 : 2.00 15622.50 61.03 0.00 0.00 0.00 0.00 0.00 00:08:07.279 [2024-10-30T11:19:39.960Z] =================================================================================================================== 00:08:07.279 [2024-10-30T11:19:39.960Z] Total : 15622.50 61.03 0.00 0.00 0.00 0.00 0.00 00:08:07.279 00:08:07.279 true 00:08:07.279 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:07.279 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:07.537 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:07.537 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:07.537 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 519517 00:08:08.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.100 Nvme0n1 : 3.00 15622.00 61.02 0.00 0.00 0.00 0.00 0.00 00:08:08.100 [2024-10-30T11:19:40.781Z] =================================================================================================================== 00:08:08.100 [2024-10-30T11:19:40.781Z] Total : 15622.00 61.02 0.00 0.00 0.00 0.00 0.00 00:08:08.100 00:08:09.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.477 Nvme0n1 : 4.00 15719.25 61.40 0.00 0.00 0.00 0.00 0.00 00:08:09.477 [2024-10-30T11:19:42.158Z] 
=================================================================================================================== 00:08:09.477 [2024-10-30T11:19:42.158Z] Total : 15719.25 61.40 0.00 0.00 0.00 0.00 0.00 00:08:09.477 00:08:10.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.412 Nvme0n1 : 5.00 15790.60 61.68 0.00 0.00 0.00 0.00 0.00 00:08:10.412 [2024-10-30T11:19:43.093Z] =================================================================================================================== 00:08:10.412 [2024-10-30T11:19:43.093Z] Total : 15790.60 61.68 0.00 0.00 0.00 0.00 0.00 00:08:10.412 00:08:11.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.347 Nvme0n1 : 6.00 15837.67 61.87 0.00 0.00 0.00 0.00 0.00 00:08:11.347 [2024-10-30T11:19:44.028Z] =================================================================================================================== 00:08:11.347 [2024-10-30T11:19:44.028Z] Total : 15837.67 61.87 0.00 0.00 0.00 0.00 0.00 00:08:11.347 00:08:12.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.283 Nvme0n1 : 7.00 15906.57 62.14 0.00 0.00 0.00 0.00 0.00 00:08:12.283 [2024-10-30T11:19:44.964Z] =================================================================================================================== 00:08:12.283 [2024-10-30T11:19:44.964Z] Total : 15906.57 62.14 0.00 0.00 0.00 0.00 0.00 00:08:12.283 00:08:13.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.217 Nvme0n1 : 8.00 15958.12 62.34 0.00 0.00 0.00 0.00 0.00 00:08:13.217 [2024-10-30T11:19:45.898Z] =================================================================================================================== 00:08:13.217 [2024-10-30T11:19:45.898Z] Total : 15958.12 62.34 0.00 0.00 0.00 0.00 0.00 00:08:13.217 00:08:14.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.152 Nvme0n1 : 9.00 15992.89 62.47 0.00 0.00 0.00 0.00 0.00 00:08:14.152 [2024-10-30T11:19:46.833Z] =================================================================================================================== 00:08:14.152 [2024-10-30T11:19:46.833Z] Total : 15992.89 62.47 0.00 0.00 0.00 0.00 0.00 00:08:14.152 00:08:15.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.089 Nvme0n1 : 10.00 16020.00 62.58 0.00 0.00 0.00 0.00 0.00 00:08:15.089 [2024-10-30T11:19:47.770Z] =================================================================================================================== 00:08:15.089 [2024-10-30T11:19:47.770Z] Total : 16020.00 62.58 0.00 0.00 0.00 0.00 0.00 00:08:15.089 00:08:15.089 00:08:15.089 Latency(us) 00:08:15.089 [2024-10-30T11:19:47.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.089 Nvme0n1 : 10.00 16018.83 62.57 0.00 0.00 7985.70 4611.79 17670.45 00:08:15.089 [2024-10-30T11:19:47.770Z] =================================================================================================================== 00:08:15.089 [2024-10-30T11:19:47.770Z] Total : 16018.83 62.57 0.00 0.00 7985.70 4611.79 17670.45 00:08:15.089 { 00:08:15.089 "results": [ 00:08:15.089 { 00:08:15.089 "job": "Nvme0n1", 00:08:15.089 "core_mask": "0x2", 00:08:15.089 "workload": "randwrite", 00:08:15.089 "status": "finished", 00:08:15.089 "queue_depth": 128, 00:08:15.089 "io_size": 4096, 00:08:15.089 
"runtime": 10.004724, 00:08:15.089 "iops": 16018.832703430899, 00:08:15.089 "mibps": 62.57356524777695, 00:08:15.089 "io_failed": 0, 00:08:15.089 "io_timeout": 0, 00:08:15.089 "avg_latency_us": 7985.69704816682, 00:08:15.089 "min_latency_us": 4611.792592592593, 00:08:15.089 "max_latency_us": 17670.447407407406 00:08:15.089 } 00:08:15.089 ], 00:08:15.089 "core_count": 1 00:08:15.089 } 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 519381 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 519381 ']' 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 519381 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 519381 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 519381' 00:08:15.347 killing process with pid 519381 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 519381 00:08:15.347 Received shutdown signal, test time was about 10.000000 seconds 00:08:15.347 00:08:15.347 Latency(us) 00:08:15.347 [2024-10-30T11:19:48.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.347 [2024-10-30T11:19:48.028Z] =================================================================================================================== 00:08:15.347 [2024-10-30T11:19:48.028Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:15.347 12:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 519381 00:08:15.605 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.864 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:16.122 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:16.122 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:16.380 12:19:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 516862 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 516862 00:08:16.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 516862 Killed "${NVMF_APP[@]}" "$@" 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=520850 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 520850 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 520850 ']' 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.380 12:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.380 [2024-10-30 12:19:48.934771] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:08:16.380 [2024-10-30 12:19:48.934846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.381 [2024-10-30 12:19:49.006175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.381 [2024-10-30 12:19:49.060342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.381 [2024-10-30 12:19:49.060420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.381 [2024-10-30 12:19:49.060443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.381 [2024-10-30 12:19:49.060470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:16.381 [2024-10-30 12:19:49.060480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:16.381 [2024-10-30 12:19:49.061085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:16.639 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:08:16.898 [2024-10-30 12:19:49.454533] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:08:16.898 [2024-10-30 12:19:49.454700] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:08:16.898 [2024-10-30 12:19:49.454746] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:16.898 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:08:17.159 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a -t 2000
00:08:17.418 [
00:08:17.418 {
00:08:17.418 "name": "6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a",
00:08:17.418 "aliases": [
00:08:17.418 "lvs/lvol"
00:08:17.418 ],
00:08:17.418 "product_name": "Logical Volume",
00:08:17.418 "block_size": 4096,
00:08:17.418 "num_blocks": 38912,
00:08:17.418 "uuid": "6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a",
00:08:17.418 "assigned_rate_limits": {
00:08:17.418 "rw_ios_per_sec": 0,
00:08:17.418 "rw_mbytes_per_sec": 0,
00:08:17.418 "r_mbytes_per_sec": 0, 00:08:17.418 "w_mbytes_per_sec": 0 00:08:17.418 }, 00:08:17.418 "claimed": false, 00:08:17.418 "zoned": false, 00:08:17.418 "supported_io_types": { 00:08:17.418 "read": true, 00:08:17.418 "write": true, 00:08:17.418 "unmap": true, 00:08:17.418 "flush": false, 00:08:17.418 "reset": true, 00:08:17.418 "nvme_admin": false, 00:08:17.418 "nvme_io": false, 00:08:17.418 "nvme_io_md": false, 00:08:17.418 "write_zeroes": true, 00:08:17.418 "zcopy": false, 00:08:17.418 "get_zone_info": false, 00:08:17.418 "zone_management": false, 00:08:17.418 "zone_append": false, 00:08:17.418 "compare": false, 00:08:17.418 "compare_and_write": false, 00:08:17.418 "abort": false, 00:08:17.418 "seek_hole": true, 00:08:17.418 "seek_data": true, 00:08:17.418 "copy": false, 00:08:17.418 "nvme_iov_md": false 00:08:17.418 }, 00:08:17.418 "driver_specific": { 00:08:17.418 "lvol": { 00:08:17.419 "lvol_store_uuid": "23e319a2-2430-4f56-beac-2cb156156d2b", 00:08:17.419 "base_bdev": "aio_bdev", 00:08:17.419 "thin_provision": false, 00:08:17.419 "num_allocated_clusters": 38, 00:08:17.419 "snapshot": false, 00:08:17.419 "clone": false, 00:08:17.419 "esnap_clone": false 00:08:17.419 } 00:08:17.419 } 00:08:17.419 } 00:08:17.419 ] 00:08:17.419 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:17.419 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:17.419 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:17.677 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:17.677 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:17.677 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:17.935 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:17.935 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.193 [2024-10-30 12:19:50.840142] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:18.193 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:18.193 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:18.193 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:18.193 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.193 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.193 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.450 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.450 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.450 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.450 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.450 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.450 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:18.450 request: 00:08:18.450 { 00:08:18.450 "uuid": "23e319a2-2430-4f56-beac-2cb156156d2b", 00:08:18.450 "method": "bdev_lvol_get_lvstores", 00:08:18.450 "req_id": 1 00:08:18.450 } 00:08:18.450 Got JSON-RPC error response 00:08:18.450 response: 00:08:18.450 { 00:08:18.450 "code": -19, 00:08:18.450 "message": "No such device" 00:08:18.450 } 00:08:18.708 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:18.708 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.708 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.709 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.709 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.967 aio_bdev 00:08:18.967 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a 00:08:18.967 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a 00:08:18.967 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:18.967 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:18.967 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:18.967 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:18.967 12:19:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:19.225 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a -t 2000 00:08:19.484 [ 00:08:19.484 { 00:08:19.484 "name": "6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a", 00:08:19.484 "aliases": [ 00:08:19.484 "lvs/lvol" 00:08:19.484 ], 00:08:19.484 "product_name": "Logical Volume", 00:08:19.484 "block_size": 4096, 00:08:19.484 "num_blocks": 38912, 00:08:19.484 "uuid": "6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a", 00:08:19.484 "assigned_rate_limits": { 00:08:19.484 "rw_ios_per_sec": 0, 00:08:19.484 "rw_mbytes_per_sec": 0, 00:08:19.484 "r_mbytes_per_sec": 0, 00:08:19.484 "w_mbytes_per_sec": 0 00:08:19.484 }, 00:08:19.484 "claimed": false, 00:08:19.484 "zoned": false, 00:08:19.484 "supported_io_types": { 00:08:19.484 "read": true, 00:08:19.484 "write": true, 00:08:19.484 "unmap": true, 00:08:19.484 "flush": false, 00:08:19.484 "reset": true, 00:08:19.484 "nvme_admin": false, 00:08:19.484 "nvme_io": false, 00:08:19.484 "nvme_io_md": false, 00:08:19.484 "write_zeroes": true, 00:08:19.484 "zcopy": false, 00:08:19.484 "get_zone_info": false, 00:08:19.484 "zone_management": false, 00:08:19.484 "zone_append": false, 00:08:19.484 "compare": false, 00:08:19.484 "compare_and_write": false, 00:08:19.484 "abort": false, 00:08:19.484 "seek_hole": true, 00:08:19.484 "seek_data": true, 00:08:19.484 "copy": false, 00:08:19.484 "nvme_iov_md": false 00:08:19.484 }, 00:08:19.484 "driver_specific": { 00:08:19.484 "lvol": { 00:08:19.484 "lvol_store_uuid": "23e319a2-2430-4f56-beac-2cb156156d2b", 00:08:19.484 "base_bdev": "aio_bdev", 00:08:19.484 "thin_provision": false, 00:08:19.484 "num_allocated_clusters": 38, 00:08:19.484 "snapshot": false, 00:08:19.484 "clone": false, 00:08:19.484 "esnap_clone": false 00:08:19.484 } 00:08:19.484 } 00:08:19.484 } 00:08:19.484 ] 00:08:19.484 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:19.484 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:19.484 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.742 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.742 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:19.742 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:20.000 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:20.000 12:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6cdbe0f8-3016-41e0-a4b8-da5ec1e58b0a 00:08:20.258 12:19:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23e319a2-2430-4f56-beac-2cb156156d2b 00:08:20.516 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.774 00:08:20.774 real 0m19.505s 00:08:20.774 user 0m48.774s 00:08:20.774 sys 0m4.818s 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.774 ************************************ 00:08:20.774 END TEST lvs_grow_dirty 00:08:20.774 ************************************ 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:20.774 nvmf_trace.0 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.774 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.774 rmmod nvme_tcp 00:08:20.774 rmmod nvme_fabrics 00:08:20.774 rmmod nvme_keyring 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:21.033 
12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 520850 ']' 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 520850 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 520850 ']' 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 520850 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 520850 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 520850' 00:08:21.033 killing process with pid 520850 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 520850 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 520850 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.033 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.293 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.293 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.293 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.293 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.293 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.202 00:08:23.202 real 0m42.647s 00:08:23.202 user 1m12.147s 00:08:23.202 sys 0m8.473s 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.202 ************************************ 00:08:23.202 END TEST nvmf_lvs_grow 00:08:23.202 ************************************ 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.202 ************************************ 00:08:23.202 START TEST nvmf_bdev_io_wait 00:08:23.202 ************************************ 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:23.202 * Looking for test storage... 00:08:23.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:23.202 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:23.460 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:23.460 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.460 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.460 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:23.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.461 --rc genhtml_branch_coverage=1 00:08:23.461 --rc genhtml_function_coverage=1 00:08:23.461 --rc genhtml_legend=1 00:08:23.461 --rc geninfo_all_blocks=1 00:08:23.461 --rc geninfo_unexecuted_blocks=1 00:08:23.461 00:08:23.461 ' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:23.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.461 --rc genhtml_branch_coverage=1 00:08:23.461 --rc genhtml_function_coverage=1 00:08:23.461 --rc genhtml_legend=1 00:08:23.461 --rc geninfo_all_blocks=1 00:08:23.461 --rc geninfo_unexecuted_blocks=1 00:08:23.461 00:08:23.461 ' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:23.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.461 --rc genhtml_branch_coverage=1 00:08:23.461 --rc genhtml_function_coverage=1 00:08:23.461 --rc genhtml_legend=1 00:08:23.461 --rc geninfo_all_blocks=1 00:08:23.461 --rc geninfo_unexecuted_blocks=1 00:08:23.461 00:08:23.461 ' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:23.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.461 --rc genhtml_branch_coverage=1 00:08:23.461 --rc genhtml_function_coverage=1 00:08:23.461 --rc genhtml_legend=1 00:08:23.461 --rc geninfo_all_blocks=1 00:08:23.461 --rc geninfo_unexecuted_blocks=1 00:08:23.461 00:08:23.461 ' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.461 12:19:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.461 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.462 12:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.053 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:26.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:26.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.054 12:19:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:26.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:26.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:08:26.054 00:08:26.054 --- 10.0.0.2 ping statistics --- 00:08:26.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.054 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms
00:08:26.054
00:08:26.054 --- 10.0.0.1 ping statistics ---
00:08:26.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:26.054 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:26.054 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=523398
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 523398
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 523398 ']'
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:26.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:26.055 [2024-10-30 12:19:58.410759] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:08:26.055 [2024-10-30 12:19:58.410861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.055 [2024-10-30 12:19:58.486674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.055 [2024-10-30 12:19:58.547900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.055 [2024-10-30 12:19:58.547966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.055 [2024-10-30 12:19:58.547993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.055 [2024-10-30 12:19:58.548004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.055 [2024-10-30 12:19:58.548013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.055 [2024-10-30 12:19:58.549537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.055 [2024-10-30 12:19:58.549577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.055 [2024-10-30 12:19:58.549668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.055 [2024-10-30 12:19:58.549672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.055 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:26.314 [2024-10-30 12:19:58.754592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.314 Malloc0 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.314 [2024-10-30 12:19:58.807800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=523542 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=523543 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=523546 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:26.314 12:19:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.314 { 00:08:26.314 "params": { 00:08:26.314 "name": "Nvme$subsystem", 00:08:26.314 "trtype": "$TEST_TRANSPORT", 00:08:26.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.314 "adrfam": "ipv4", 00:08:26.314 "trsvcid": "$NVMF_PORT", 00:08:26.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.314 "hdgst": ${hdgst:-false}, 00:08:26.314 "ddgst": ${ddgst:-false} 00:08:26.314 }, 00:08:26.314 "method": "bdev_nvme_attach_controller" 00:08:26.314 } 00:08:26.314 EOF 00:08:26.314 )") 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.314 { 00:08:26.314 "params": { 00:08:26.314 "name": "Nvme$subsystem", 00:08:26.314 "trtype": "$TEST_TRANSPORT", 00:08:26.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.314 "adrfam": "ipv4", 00:08:26.314 "trsvcid": "$NVMF_PORT", 00:08:26.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.314 "hdgst": ${hdgst:-false}, 00:08:26.314 "ddgst": ${ddgst:-false} 00:08:26.314 }, 00:08:26.314 "method": "bdev_nvme_attach_controller" 00:08:26.314 } 00:08:26.314 EOF 00:08:26.314 )") 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=523548 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:26.314 12:19:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.314 { 00:08:26.314 "params": { 00:08:26.314 "name": "Nvme$subsystem", 00:08:26.314 "trtype": "$TEST_TRANSPORT", 00:08:26.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.314 "adrfam": "ipv4", 00:08:26.314 "trsvcid": "$NVMF_PORT", 00:08:26.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.314 "hdgst": ${hdgst:-false}, 00:08:26.314 "ddgst": ${ddgst:-false} 00:08:26.314 }, 00:08:26.314 "method": "bdev_nvme_attach_controller" 00:08:26.314 } 00:08:26.314 EOF 00:08:26.314 )") 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:26.314 { 00:08:26.314 "params": { 00:08:26.314 "name": "Nvme$subsystem", 00:08:26.314 "trtype": "$TEST_TRANSPORT", 00:08:26.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.314 "adrfam": "ipv4", 00:08:26.314 "trsvcid": "$NVMF_PORT", 00:08:26.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.314 "hdgst": ${hdgst:-false}, 00:08:26.314 "ddgst": ${ddgst:-false} 00:08:26.314 }, 00:08:26.314 "method": "bdev_nvme_attach_controller" 00:08:26.314 } 00:08:26.314 EOF 00:08:26.314 )") 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 523542 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
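gen_nvmf_target_json builds, for each bdevperf instance, the JSON entry that attaches an NVMe-oF controller to the target; the printf output just below shows the fully expanded form. A schematic reconstruction from the heredoc fragments above (the wrapping of these entries into the complete {"subsystems": ...} bdev config that bdevperf ultimately consumes happens inside nvmf/common.sh and is not visible in this excerpt, so the sketch stops at the jq validation step):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # the heredoc is expanded at generation time, so the variables
        # resolve to tcp / 10.0.0.2 / 4420 as printed below
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # join the entries and pretty-print/validate them via jq
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}
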
00:08:26.314 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.315 "params": { 00:08:26.315 "name": "Nvme1", 00:08:26.315 "trtype": "tcp", 00:08:26.315 "traddr": "10.0.0.2", 00:08:26.315 "adrfam": "ipv4", 00:08:26.315 "trsvcid": "4420", 00:08:26.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.315 "hdgst": false, 00:08:26.315 "ddgst": false 00:08:26.315 }, 00:08:26.315 "method": "bdev_nvme_attach_controller" 00:08:26.315 }' 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.315 "params": { 00:08:26.315 "name": "Nvme1", 00:08:26.315 "trtype": "tcp", 00:08:26.315 "traddr": "10.0.0.2", 00:08:26.315 "adrfam": "ipv4", 00:08:26.315 "trsvcid": "4420", 00:08:26.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.315 "hdgst": false, 00:08:26.315 "ddgst": false 00:08:26.315 }, 00:08:26.315 "method": "bdev_nvme_attach_controller" 00:08:26.315 }' 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.315 "params": { 00:08:26.315 "name": "Nvme1", 00:08:26.315 "trtype": "tcp", 00:08:26.315 "traddr": "10.0.0.2", 00:08:26.315 "adrfam": "ipv4", 00:08:26.315 "trsvcid": "4420", 00:08:26.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.315 "hdgst": false, 00:08:26.315 "ddgst": false 00:08:26.315 }, 00:08:26.315 "method": "bdev_nvme_attach_controller" 00:08:26.315 }' 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:26.315 12:19:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:26.315 "params": { 00:08:26.315 "name": "Nvme1", 00:08:26.315 "trtype": "tcp", 00:08:26.315 "traddr": "10.0.0.2", 00:08:26.315 "adrfam": "ipv4", 00:08:26.315 "trsvcid": "4420", 00:08:26.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.315 "hdgst": false, 00:08:26.315 "ddgst": false 00:08:26.315 }, 00:08:26.315 "method": "bdev_nvme_attach_controller" 00:08:26.315 }' 00:08:26.315 [2024-10-30 12:19:58.859192] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:08:26.315 [2024-10-30 12:19:58.859192] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:08:26.315 [2024-10-30 12:19:58.859192] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
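Four bdevperf instances (write, read, flush, unmap) are being launched here in parallel against the same cnode1 subsystem; the EAL banners that follow belong to those four processes, and --json /dev/fd/63 in the command lines is the process substitution feeding each one its config. A schematic reconstruction of the launch pattern, with the exact flags from the command lines above (the backgrounding and PID bookkeeping are paraphrased from the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID assignments, not verbatim script source):

BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# disjoint core masks (-m) and instance ids (-i N, giving --file-prefix=spdkN)
# let four DPDK processes share the host without clobbering each other's
# hugepage/shm state
"$BPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
"$BPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
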
00:08:26.315 [2024-10-30 12:19:58.859310] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:08:26.315 [2024-10-30 12:19:58.859311] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:26.315 [2024-10-30 12:19:58.859310] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:08:26.315 [2024-10-30 12:19:58.859419] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:08:26.315 [2024-10-30 12:19:58.859487] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:08:26.572 [2024-10-30 12:19:59.041121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.572 [2024-10-30 12:19:59.094771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:26.572 [2024-10-30 12:19:59.141984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.572 [2024-10-30 12:19:59.196057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:26.572 [2024-10-30 12:19:59.241840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.831 [2024-10-30 12:19:59.297861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:26.831 [2024-10-30 12:19:59.315200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.831 [2024-10-30 12:19:59.366433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:08:26.831 Running I/O for 1 seconds...
00:08:26.831 Running I/O for 1 seconds...
00:08:27.088 Running I/O for 1 seconds...
00:08:27.088 Running I/O for 1 seconds...
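Each of the four jobs now runs for one second. In the result tables that follow, the MiB/s column is derived from the IOPS column: throughput = IOPS x I/O size (4096 bytes here) / 2^20. For example, for the write job:

# ~6709 IOPS at 4 KiB per I/O comes out to the 26.21 MiB/s shown below
awk 'BEGIN { printf "%.2f MiB/s\n", 6709.44 * 4096 / 1048576 }'
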
00:08:28.048 6739.00 IOPS, 26.32 MiB/s 00:08:28.048 Latency(us) 00:08:28.048 [2024-10-30T11:20:00.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.048 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:28.048 Nvme1n1 : 1.02 6709.44 26.21 0.00 0.00 18829.46 7961.41 29515.47 00:08:28.048 [2024-10-30T11:20:00.729Z] =================================================================================================================== 00:08:28.048 [2024-10-30T11:20:00.729Z] Total : 6709.44 26.21 0.00 0.00 18829.46 7961.41 29515.47 00:08:28.048 190640.00 IOPS, 744.69 MiB/s 00:08:28.048 Latency(us) 00:08:28.048 [2024-10-30T11:20:00.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.048 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:28.048 Nvme1n1 : 1.00 190270.88 743.25 0.00 0.00 669.09 309.48 1929.67 00:08:28.048 [2024-10-30T11:20:00.729Z] =================================================================================================================== 00:08:28.048 [2024-10-30T11:20:00.729Z] Total : 190270.88 743.25 0.00 0.00 669.09 309.48 1929.67 00:08:28.048 6551.00 IOPS, 25.59 MiB/s 00:08:28.048 Latency(us) 00:08:28.048 [2024-10-30T11:20:00.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.048 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:28.048 Nvme1n1 : 1.01 6654.90 26.00 0.00 0.00 19177.12 4247.70 38836.15 00:08:28.048 [2024-10-30T11:20:00.729Z] =================================================================================================================== 00:08:28.048 [2024-10-30T11:20:00.729Z] Total : 6654.90 26.00 0.00 0.00 19177.12 4247.70 38836.15 00:08:28.048 8767.00 IOPS, 34.25 MiB/s 00:08:28.048 Latency(us) 00:08:28.048 [2024-10-30T11:20:00.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.048 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:28.048 Nvme1n1 : 1.01 8820.30 34.45 0.00 0.00 14446.14 7233.23 25631.86 00:08:28.048 [2024-10-30T11:20:00.729Z] =================================================================================================================== 00:08:28.048 [2024-10-30T11:20:00.729Z] Total : 8820.30 34.45 0.00 0.00 14446.14 7233.23 25631.86 00:08:28.048 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 523543 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 523546 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 523548 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.305 rmmod nvme_tcp 00:08:28.305 rmmod nvme_fabrics 00:08:28.305 rmmod nvme_keyring 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 523398 ']' 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 523398 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 523398 ']' 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 523398 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 523398 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 523398' 00:08:28.305 killing process with pid 523398 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 523398 00:08:28.305 12:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 523398 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.563 12:20:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.563 12:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.463 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.723 00:08:30.723 real 0m7.337s 00:08:30.723 user 0m16.030s 00:08:30.723 sys 0m3.564s 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 ************************************ 00:08:30.723 END TEST nvmf_bdev_io_wait 00:08:30.723 ************************************ 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.723 ************************************ 00:08:30.723 START TEST nvmf_queue_depth 00:08:30.723 ************************************ 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:30.723 * Looking for test storage... 
00:08:30.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:30.723 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.724 --rc genhtml_branch_coverage=1 00:08:30.724 --rc genhtml_function_coverage=1 00:08:30.724 --rc genhtml_legend=1 00:08:30.724 --rc geninfo_all_blocks=1 00:08:30.724 --rc geninfo_unexecuted_blocks=1 00:08:30.724 00:08:30.724 ' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.724 --rc genhtml_branch_coverage=1 00:08:30.724 --rc genhtml_function_coverage=1 00:08:30.724 --rc genhtml_legend=1 00:08:30.724 --rc geninfo_all_blocks=1 00:08:30.724 --rc geninfo_unexecuted_blocks=1 00:08:30.724 00:08:30.724 ' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.724 --rc genhtml_branch_coverage=1 00:08:30.724 --rc genhtml_function_coverage=1 00:08:30.724 --rc genhtml_legend=1 00:08:30.724 --rc geninfo_all_blocks=1 00:08:30.724 --rc geninfo_unexecuted_blocks=1 00:08:30.724 00:08:30.724 ' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.724 --rc genhtml_branch_coverage=1 00:08:30.724 --rc genhtml_function_coverage=1 00:08:30.724 --rc genhtml_legend=1 00:08:30.724 --rc geninfo_all_blocks=1 00:08:30.724 --rc geninfo_unexecuted_blocks=1 00:08:30.724 00:08:30.724 ' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.724 12:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:33.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:33.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:33.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:33.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:08:33.260 00:08:33.260 --- 10.0.0.2 ping statistics --- 00:08:33.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.260 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:33.260 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:33.260 00:08:33.260 --- 10.0.0.1 ping statistics --- 00:08:33.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.260 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=525776 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 525776 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 525776 ']' 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 [2024-10-30 12:20:05.618660] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
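The queue_depth target is brought up the same way as in nvmf_bdev_io_wait above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exposing it on 10.0.0.2:4420. rpc_cmd in this log is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent direct invocations would look roughly like this (the listener step is visible in the bdev_io_wait run; this excerpt of the queue_depth run is truncated before it appears):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
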
00:08:33.261 [2024-10-30 12:20:05.618746] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.261 [2024-10-30 12:20:05.695034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.261 [2024-10-30 12:20:05.753899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.261 [2024-10-30 12:20:05.753959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.261 [2024-10-30 12:20:05.753972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.261 [2024-10-30 12:20:05.753984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.261 [2024-10-30 12:20:05.753993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.261 [2024-10-30 12:20:05.754611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 [2024-10-30 12:20:05.902011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 Malloc0 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.261 12:20:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.261 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.519 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.519 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.520 [2024-10-30 12:20:05.950338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=525801 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 525801 /var/tmp/bdevperf.sock 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 525801 ']' 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.520 12:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.520 [2024-10-30 12:20:05.996869] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
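The rpc_cmd traces above configure the target end to end: a TCP transport (with the -o and -u 8192 options recorded in NVMF_TRANSPORT_OPTS), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the test-suite wrapper around scripts/rpc.py; run by hand against the same sockets, the sequence would be roughly:

    # Target configuration (rpc.py defaults to /var/tmp/spdk.sock, which the
    # nvmf_tgt above serves; repository paths abbreviated):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # bdevperf runs as a second SPDK app on its own RPC socket, attaches to the
    # target over TCP, then drives 1024 outstanding 4 KiB verify I/Os for 10 s:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The queue depth is the point of the test: with 1024 I/Os in flight and roughly 8.8K IOPS, Little's law predicts an average latency near 1024 / 8779 ≈ 117 ms, which is what the results below report (~115.7 ms).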
00:08:33.520 [2024-10-30 12:20:05.996939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525801 ] 00:08:33.520 [2024-10-30 12:20:06.063684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.520 [2024-10-30 12:20:06.121132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.778 NVMe0n1 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.778 12:20:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.778 Running I/O for 10 seconds... 00:08:36.091 8223.00 IOPS, 32.12 MiB/s [2024-10-30T11:20:09.709Z] 8628.00 IOPS, 33.70 MiB/s [2024-10-30T11:20:10.644Z] 8585.00 IOPS, 33.54 MiB/s [2024-10-30T11:20:11.580Z] 8694.00 IOPS, 33.96 MiB/s [2024-10-30T11:20:12.515Z] 8688.20 IOPS, 33.94 MiB/s [2024-10-30T11:20:13.473Z] 8700.00 IOPS, 33.98 MiB/s [2024-10-30T11:20:14.846Z] 8758.43 IOPS, 34.21 MiB/s [2024-10-30T11:20:15.779Z] 8759.75 IOPS, 34.22 MiB/s [2024-10-30T11:20:16.712Z] 8762.78 IOPS, 34.23 MiB/s [2024-10-30T11:20:16.712Z] 8790.10 IOPS, 34.34 MiB/s 00:08:44.031 Latency(us) 00:08:44.031 [2024-10-30T11:20:16.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.031 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:44.031 Verification LBA range: start 0x0 length 0x4000 00:08:44.031 NVMe0n1 : 10.13 8778.86 34.29 0.00 0.00 115724.95 22622.06 79225.74 00:08:44.031 [2024-10-30T11:20:16.712Z] =================================================================================================================== 00:08:44.031 [2024-10-30T11:20:16.712Z] Total : 8778.86 34.29 0.00 0.00 115724.95 22622.06 79225.74 00:08:44.031 { 00:08:44.031 "results": [ 00:08:44.031 { 00:08:44.031 "job": "NVMe0n1", 00:08:44.031 "core_mask": "0x1", 00:08:44.031 "workload": "verify", 00:08:44.031 "status": "finished", 00:08:44.031 "verify_range": { 00:08:44.031 "start": 0, 00:08:44.031 "length": 16384 00:08:44.031 }, 00:08:44.031 "queue_depth": 1024, 00:08:44.031 "io_size": 4096, 00:08:44.031 "runtime": 10.129452, 00:08:44.031 "iops": 8778.855953905502, 00:08:44.031 "mibps": 34.29240606994337, 00:08:44.032 "io_failed": 0, 00:08:44.032 "io_timeout": 0, 00:08:44.032 "avg_latency_us": 115724.94890270826, 00:08:44.032 "min_latency_us": 22622.056296296298, 00:08:44.032 "max_latency_us": 79225.74222222222 00:08:44.032 } 00:08:44.032 ], 00:08:44.032 "core_count": 1 00:08:44.032 } 00:08:44.032 12:20:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 525801 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 525801 ']' 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 525801 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 525801 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 525801' 00:08:44.032 killing process with pid 525801 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 525801 00:08:44.032 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.032 00:08:44.032 Latency(us) 00:08:44.032 [2024-10-30T11:20:16.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.032 [2024-10-30T11:20:16.713Z] =================================================================================================================== 00:08:44.032 [2024-10-30T11:20:16.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.032 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 525801 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.289 rmmod nvme_tcp 00:08:44.289 rmmod nvme_fabrics 00:08:44.289 rmmod nvme_keyring 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 525776 ']' 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 525776 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 525776 ']' 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 525776 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 525776 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 525776' 00:08:44.289 killing process with pid 525776 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 525776 00:08:44.289 12:20:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 525776 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.548 12:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.097 00:08:47.097 real 0m16.062s 00:08:47.097 user 0m22.553s 00:08:47.097 sys 0m3.108s 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.097 ************************************ 00:08:47.097 END TEST nvmf_queue_depth 00:08:47.097 ************************************ 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.097 ************************************ 00:08:47.097 START TEST nvmf_target_multipath 00:08:47.097 ************************************ 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:47.097 * Looking for test storage... 00:08:47.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.097 --rc genhtml_branch_coverage=1 00:08:47.097 --rc genhtml_function_coverage=1 00:08:47.097 --rc genhtml_legend=1 00:08:47.097 --rc geninfo_all_blocks=1 00:08:47.097 --rc geninfo_unexecuted_blocks=1 00:08:47.097 00:08:47.097 ' 00:08:47.097 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.097 --rc genhtml_branch_coverage=1 00:08:47.097 --rc genhtml_function_coverage=1 00:08:47.097 --rc genhtml_legend=1 00:08:47.097 --rc geninfo_all_blocks=1 00:08:47.098 --rc geninfo_unexecuted_blocks=1 00:08:47.098 00:08:47.098 ' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:47.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.098 --rc genhtml_branch_coverage=1 00:08:47.098 --rc genhtml_function_coverage=1 00:08:47.098 --rc genhtml_legend=1 00:08:47.098 --rc geninfo_all_blocks=1 00:08:47.098 --rc geninfo_unexecuted_blocks=1 00:08:47.098 00:08:47.098 ' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.098 --rc genhtml_branch_coverage=1 00:08:47.098 --rc genhtml_function_coverage=1 00:08:47.098 --rc genhtml_legend=1 00:08:47.098 --rc geninfo_all_blocks=1 00:08:47.098 --rc geninfo_unexecuted_blocks=1 00:08:47.098 00:08:47.098 ' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.098 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.099 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.099 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.099 12:20:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.007 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.007 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.007 12:20:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.007 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.008 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:08:49.268 00:08:49.268 --- 10.0.0.2 ping statistics --- 00:08:49.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.268 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:08:49.268 00:08:49.268 --- 10.0.0.1 ping statistics --- 00:08:49.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.268 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:49.268 only one NIC for nvmf test 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
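Unlike the queue-depth run, multipath never starts a target here: at multipath.sh@45 the test looks for a second target address and finds none (only two net devices were discovered, so NVMF_SECOND_TARGET_IP was left empty at common.sh@262), prints "only one NIC for nvmf test", and bails out cleanly. Paraphrased as a sketch (the exact variable tested at line 45 is an assumption inferred from common.sh@262):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini      # unwind the namespace/iptables setup done above
        exit 0            # skip, don't fail, on two-port rigs
    fi

Exiting 0 keeps the suite green on machines like this one; the actual multipath cases only execute where a second target/initiator interface pair exists.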
00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.268 rmmod nvme_tcp 00:08:49.268 rmmod nvme_fabrics 00:08:49.268 rmmod nvme_keyring 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.268 12:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.807 00:08:51.807 real 0m4.646s 00:08:51.807 user 0m0.929s 00:08:51.807 sys 0m1.657s 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 ************************************ 00:08:51.807 END TEST nvmf_target_multipath 00:08:51.807 ************************************ 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.807 12:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.807 ************************************ 00:08:51.807 START TEST nvmf_zcopy 00:08:51.807 ************************************ 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.808 * Looking for test storage... 
00:08:51.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.808 --rc genhtml_branch_coverage=1 00:08:51.808 --rc genhtml_function_coverage=1 00:08:51.808 --rc genhtml_legend=1 00:08:51.808 --rc geninfo_all_blocks=1 00:08:51.808 --rc geninfo_unexecuted_blocks=1 00:08:51.808 00:08:51.808 ' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.808 --rc genhtml_branch_coverage=1 00:08:51.808 --rc genhtml_function_coverage=1 00:08:51.808 --rc genhtml_legend=1 00:08:51.808 --rc geninfo_all_blocks=1 00:08:51.808 --rc geninfo_unexecuted_blocks=1 00:08:51.808 00:08:51.808 ' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.808 --rc genhtml_branch_coverage=1 00:08:51.808 --rc genhtml_function_coverage=1 00:08:51.808 --rc genhtml_legend=1 00:08:51.808 --rc geninfo_all_blocks=1 00:08:51.808 --rc geninfo_unexecuted_blocks=1 00:08:51.808 00:08:51.808 ' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.808 --rc genhtml_branch_coverage=1 00:08:51.808 --rc genhtml_function_coverage=1 00:08:51.808 --rc genhtml_legend=1 00:08:51.808 --rc geninfo_all_blocks=1 00:08:51.808 --rc geninfo_unexecuted_blocks=1 00:08:51.808 00:08:51.808 ' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.808 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.809 12:20:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.712 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:53.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:53.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:53.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:53.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.713 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:08:53.972 00:08:53.972 --- 10.0.0.2 ping statistics --- 00:08:53.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.972 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:53.972 00:08:53.972 --- 10.0.0.1 ping statistics --- 00:08:53.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.972 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=531028 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 531028 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 531028 ']' 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.972 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.972 [2024-10-30 12:20:26.512677] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
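
The trace above completes the physical test-bed bring-up: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target while the other port (cvl_0_1) stays in the root namespace as the initiator, addresses are assigned, the NVMe/TCP port is opened in the firewall, reachability is verified in both directions, and nvmf_tgt is then launched inside the namespace on core 1 (-m 0x2); its startup banner continues below. A minimal sketch of the equivalent manual steps, using only the interface names, addresses, and port shown in the log (binary path shortened for readability):

  # Put the target-side port in its own netns so initiator and target
  # traffic crosses the real NICs rather than the loopback path.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
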
00:08:53.972 [2024-10-30 12:20:26.512763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.972 [2024-10-30 12:20:26.585697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.972 [2024-10-30 12:20:26.643932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.972 [2024-10-30 12:20:26.643988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.972 [2024-10-30 12:20:26.644001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.972 [2024-10-30 12:20:26.644011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.972 [2024-10-30 12:20:26.644021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.972 [2024-10-30 12:20:26.644602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.231 [2024-10-30 12:20:26.786487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.231 [2024-10-30 12:20:26.802724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.231 malloc0 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.231 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.232 { 00:08:54.232 "params": { 00:08:54.232 "name": "Nvme$subsystem", 00:08:54.232 "trtype": "$TEST_TRANSPORT", 00:08:54.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.232 "adrfam": "ipv4", 00:08:54.232 "trsvcid": "$NVMF_PORT", 00:08:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.232 "hdgst": ${hdgst:-false}, 00:08:54.232 "ddgst": ${ddgst:-false} 00:08:54.232 }, 00:08:54.232 "method": "bdev_nvme_attach_controller" 00:08:54.232 } 00:08:54.232 EOF 00:08:54.232 )") 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
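
Before starting bdevperf, gen_nvmf_target_json assembles the initiator configuration in memory: a bdev_nvme_attach_controller stanza is expanded from a heredoc for each subsystem, the stanzas are joined with IFS=',', pretty-printed through jq, and handed to bdevperf over an inherited descriptor (--json /dev/fd/62 via process substitution), so no config file touches disk. A condensed, single-subsystem reconstruction of that pattern follows; the bdev-subsystem envelope is an assumption about the minimal wrapper bdevperf accepts, gen_target_json_sketch is a hypothetical name, and the parameter values are the ones printed in the trace:

  # Hypothetical condensation of gen_nvmf_target_json: one attach stanza
  # inside a bdev-subsystem envelope, validated and pretty-printed by jq.
  gen_target_json_sketch() {
      jq . <<< '{
        "subsystems": [{
          "subsystem": "bdev",
          "config": [{
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }]
        }]
      }'
  }
  # Same invocation as the 10-second verify run in the log:
  ./build/examples/bdevperf --json <(gen_target_json_sketch) -t 10 -q 128 -w verify -o 8192
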
00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:54.232 12:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.232 "params": { 00:08:54.232 "name": "Nvme1", 00:08:54.232 "trtype": "tcp", 00:08:54.232 "traddr": "10.0.0.2", 00:08:54.232 "adrfam": "ipv4", 00:08:54.232 "trsvcid": "4420", 00:08:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.232 "hdgst": false, 00:08:54.232 "ddgst": false 00:08:54.232 }, 00:08:54.232 "method": "bdev_nvme_attach_controller" 00:08:54.232 }' 00:08:54.232 [2024-10-30 12:20:26.881014] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:08:54.232 [2024-10-30 12:20:26.881098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531054 ] 00:08:54.490 [2024-10-30 12:20:26.949650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.490 [2024-10-30 12:20:27.008669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.748 Running I/O for 10 seconds... 00:08:56.614 5740.00 IOPS, 44.84 MiB/s [2024-10-30T11:20:30.665Z] 5789.50 IOPS, 45.23 MiB/s [2024-10-30T11:20:31.600Z] 5800.00 IOPS, 45.31 MiB/s [2024-10-30T11:20:32.533Z] 5803.50 IOPS, 45.34 MiB/s [2024-10-30T11:20:33.466Z] 5814.80 IOPS, 45.43 MiB/s [2024-10-30T11:20:34.399Z] 5822.83 IOPS, 45.49 MiB/s [2024-10-30T11:20:35.332Z] 5829.71 IOPS, 45.54 MiB/s [2024-10-30T11:20:36.265Z] 5830.25 IOPS, 45.55 MiB/s [2024-10-30T11:20:37.640Z] 5831.00 IOPS, 45.55 MiB/s [2024-10-30T11:20:37.640Z] 5833.60 IOPS, 45.58 MiB/s 00:09:04.959 Latency(us) 00:09:04.959 [2024-10-30T11:20:37.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:04.959 Verification LBA range: start 0x0 length 0x1000 00:09:04.959 Nvme1n1 : 10.02 5835.81 45.59 0.00 0.00 21874.15 3956.43 30486.38 00:09:04.959 [2024-10-30T11:20:37.640Z] =================================================================================================================== 00:09:04.959 [2024-10-30T11:20:37.640Z] Total : 5835.81 45.59 0.00 0.00 21874.15 3956.43 30486.38 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=532376 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:04.959 { 00:09:04.959 "params": { 00:09:04.959 "name": 
"Nvme$subsystem", 00:09:04.959 "trtype": "$TEST_TRANSPORT", 00:09:04.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.959 "adrfam": "ipv4", 00:09:04.959 "trsvcid": "$NVMF_PORT", 00:09:04.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.959 "hdgst": ${hdgst:-false}, 00:09:04.959 "ddgst": ${ddgst:-false} 00:09:04.959 }, 00:09:04.959 "method": "bdev_nvme_attach_controller" 00:09:04.959 } 00:09:04.959 EOF 00:09:04.959 )") 00:09:04.959 [2024-10-30 12:20:37.494753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.494793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:04.959 12:20:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:04.959 "params": { 00:09:04.959 "name": "Nvme1", 00:09:04.959 "trtype": "tcp", 00:09:04.959 "traddr": "10.0.0.2", 00:09:04.959 "adrfam": "ipv4", 00:09:04.959 "trsvcid": "4420", 00:09:04.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.959 "hdgst": false, 00:09:04.959 "ddgst": false 00:09:04.959 }, 00:09:04.959 "method": "bdev_nvme_attach_controller" 00:09:04.959 }' 00:09:04.959 [2024-10-30 12:20:37.502728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.502751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.510749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.510769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.518768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.518787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.526790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.526810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.534812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.534832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.535929] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:09:04.959 [2024-10-30 12:20:37.536001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532376 ] 00:09:04.959 [2024-10-30 12:20:37.542830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.542850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.550854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.550874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.558874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.558894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.566899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.566919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.574920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.574940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.582937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.582956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.590959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.590978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.598978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.598998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.603665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.959 [2024-10-30 12:20:37.607000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.607020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.615051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.615090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.623057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.623087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.631066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.631085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.959 [2024-10-30 12:20:37.639093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.959 [2024-10-30 12:20:37.639115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:05.217 [2024-10-30 12:20:37.647108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.647128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.217 [2024-10-30 12:20:37.655129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.655149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.217 [2024-10-30 12:20:37.663152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.663172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.217 [2024-10-30 12:20:37.665661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.217 [2024-10-30 12:20:37.671176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.671196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.217 [2024-10-30 12:20:37.679196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.679219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.217 [2024-10-30 12:20:37.687252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.687295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.217 [2024-10-30 12:20:37.695285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.217 [2024-10-30 12:20:37.695322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.703320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.703358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.711339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.711376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.719349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.719387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.727359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.727398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.735358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.735379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.743404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.743441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.751427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.751467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 
12:20:37.759449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.759498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.767448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.767469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.775468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.775490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.783492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.783513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.791527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.791554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.799557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.799581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.807570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.807594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.815602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.815624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.823621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.823641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.831646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.831669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.839664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.839685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.847686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.847706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.855789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.855815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.863776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.863799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.871806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.871831] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.879812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.879832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.887917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.887944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.218 [2024-10-30 12:20:37.895925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.218 [2024-10-30 12:20:37.895948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 Running I/O for 5 seconds... 00:09:05.476 [2024-10-30 12:20:37.903956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.903993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.918958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.918986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.930151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.930179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.941208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.941235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.952595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.952621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.963850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.963878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.974969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.974998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.985889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.985916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:37.996807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:37.996833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:38.007580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:38.007607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:38.020477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:38.020504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.476 [2024-10-30 12:20:38.030783] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.476 [2024-10-30 12:20:38.030809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical pair of errors — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 10-13 ms intervals from [2024-10-30 12:20:38.041674] through [2024-10-30 12:20:41.421882] (elapsed 00:09:05.476 to 00:09:08.836, several hundred occurrences); only the periodic throughput samples emitted during that window are kept below ...]
00:09:06.252 11556.00 IOPS, 90.28 MiB/s [2024-10-30T11:20:38.933Z]
00:09:07.283 11674.50 IOPS, 91.21 MiB/s [2024-10-30T11:20:39.964Z]
00:09:08.318 11649.33 IOPS, 91.01 MiB/s [2024-10-30T11:20:40.999Z]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.432834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.432862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.445734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.445761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.455792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.455820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.467004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.467032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.480159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.480188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.490490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.490527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.501142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.501169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.836 [2024-10-30 12:20:41.512034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.836 [2024-10-30 12:20:41.512061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.522713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.522741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.535477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.535506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.545418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.545447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.555828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.555855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.567101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.567129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.577675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.577704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.588212] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.588254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.598825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.598853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.610034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.610062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.621169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.094 [2024-10-30 12:20:41.621196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.094 [2024-10-30 12:20:41.634589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.634616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.645047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.645074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.655719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.655747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.666322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.666350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.679005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.679035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.689329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.689359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.699971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.700022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.710799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.710826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.721712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.721740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.732379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.732408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.743136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.743164] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.756172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.756200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.766252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.766290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.095 [2024-10-30 12:20:41.777164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.095 [2024-10-30 12:20:41.777191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.788446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.788474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.799180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.799223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.809647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.809673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.820711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.820738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.833042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.833068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.842560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.842602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.854348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.854376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.866890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.866918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.877332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.877360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.888000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.888027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.898852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.898879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.910061] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.910097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 11674.50 IOPS, 91.21 MiB/s [2024-10-30T11:20:42.035Z] [2024-10-30 12:20:41.921073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.921100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.932400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.932428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.944622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.944650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.954115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.954142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.965389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.965418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.976090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.976118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.986768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.986796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:41.999738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:41.999766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:42.010024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:42.010050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:42.020851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:42.020878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.354 [2024-10-30 12:20:42.031899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.354 [2024-10-30 12:20:42.031925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.612 [2024-10-30 12:20:42.042843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.612 [2024-10-30 12:20:42.042872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.612 [2024-10-30 12:20:42.056607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.612 [2024-10-30 12:20:42.056636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.612 [2024-10-30 12:20:42.067053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:09.612 [2024-10-30 12:20:42.067082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.612 [2024-10-30 12:20:42.078004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.612 [2024-10-30 12:20:42.078033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.612 [2024-10-30 12:20:42.089017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.612 [2024-10-30 12:20:42.089045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.612 [2024-10-30 12:20:42.099674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.099702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.110810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.110838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.121587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.121614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.134676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.134704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.145021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.145049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.155709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.155736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.168254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.168291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.179798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.179825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.189751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.189778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.200686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.200714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.212055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.212083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.223298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.223327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.235893] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.235920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.245078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.245106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.256462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.256490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.267207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.267235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.277955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.277983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.613 [2024-10-30 12:20:42.288611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.613 [2024-10-30 12:20:42.288638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.299439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.299467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.312041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.312068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.321851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.321894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.333125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.333153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.343814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.343842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.355000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.355028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.365793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.365821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.376457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.376487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.387781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.387809] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.870 [2024-10-30 12:20:42.398838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.870 [2024-10-30 12:20:42.398865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.411835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.411865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.422010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.422038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.433003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.433030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.445896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.445924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.455921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.455948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.466771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.466799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.479557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.479600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.489971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.489998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.501141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.501169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.512000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.512028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.522814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.522842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.535449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.535487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.871 [2024-10-30 12:20:42.544884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.871 [2024-10-30 12:20:42.544912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.556751] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.556779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.569703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.569732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.579914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.579943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.590759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.590787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.601450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.601479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.612375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.612404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.623206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.623234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.633665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.633694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.644474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.644504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.655406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.655435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.666478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.666507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.679060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.679088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.688880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.688907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.699994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.700022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.710936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.710965] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.721707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.721735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.734071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.734099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.743839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.743874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.755308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.755351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.766153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.766180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.777147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.777175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.790034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.790062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.129 [2024-10-30 12:20:42.805894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.129 [2024-10-30 12:20:42.805923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.816422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.816451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.827451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.827479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.838541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.838585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.849969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.849997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.860697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.860725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.871883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.871911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.884572] 
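The identical add_ns/ns_paused pair repeats here for every RPC the zcopy test queues against the paused subsystem; only the timestamps advance, with the periodic bdevperf throughput markers interleaved. The collision itself is straightforward to reproduce by hand. A minimal sketch in the harness's own bash, assuming a target built from this tree and the cnode1 subsystem from this log (malloc0 is the backing bdev the teardown below reuses):

    # the second add with the same NSID is exactly the failure exercised above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add: NSID 1 is now claimed
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # fails: "Requested NSID 1 already in use"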
[2024-10-30 12:20:42.884572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.884601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.388 11684.40 IOPS, 91.28 MiB/s [2024-10-30T11:20:43.069Z]
[2024-10-30 12:20:42.918121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.918148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.388 [2024-10-30 12:20:42.924646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.924670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.388
00:09:10.388 Latency(us)
00:09:10.388 [2024-10-30T11:20:43.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:10.388 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:10.388 Nvme1n1 : 5.01 11686.07 91.30 0.00 0.00 10939.80 4733.16 22233.69
00:09:10.388 [2024-10-30T11:20:43.069Z] ===================================================================================================================
00:09:10.388 [2024-10-30T11:20:43.069Z] Total : 11686.07 91.30 0.00 0.00 10939.80 4733.16 22233.69
00:09:10.388 [2024-10-30 12:20:42.932665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.388 [2024-10-30 12:20:42.932688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.647 [2024-10-30 12:20:43.101155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.647 [2024-10-30 12:20:43.101204]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.647 [2024-10-30 12:20:43.109162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.647 [2024-10-30 12:20:43.109198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.647 [2024-10-30 12:20:43.117151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.647 [2024-10-30 12:20:43.117171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.647 [2024-10-30 12:20:43.125172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.647 [2024-10-30 12:20:43.125193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (532376) - No such process 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 532376 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.647 delay0 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.647 12:20:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:10.647 [2024-10-30 12:20:43.249007] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:17.198 Initializing NVMe Controllers 00:09:17.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:17.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:17.198 Initialization complete. Launching workers. 
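Before the abort run's results arrive below, note the sequence just traced: zcopy.sh swaps NSID 1 over to a delay bdev and then drives it with the abort example. The same steps written as direct rpc.py calls, a sketch only (rpc_cmd in this harness is SPDK's thin wrapper around scripts/rpc.py, and the four bdev_delay_create latency arguments are microseconds, i.e. roughly one second of injected delay per I/O class):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latency, in us
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With every I/O held for about a second behind delay0, most of the 64 queued commands are still outstanding when abort fires, which is what produces the submitted/success/unsuccessful counts below.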
00:09:17.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1067 00:09:17.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1354, failed to submit 33 00:09:17.198 success 1211, unsuccessful 143, failed 0 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.198 rmmod nvme_tcp 00:09:17.198 rmmod nvme_fabrics 00:09:17.198 rmmod nvme_keyring 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 531028 ']' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 531028 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 531028 ']' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 531028 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 531028 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 531028' 00:09:17.198 killing process with pid 531028 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 531028 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 531028 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.198 12:20:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.198 12:20:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.738 00:09:19.738 real 0m27.856s 00:09:19.738 user 0m41.004s 00:09:19.738 sys 0m8.214s 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.738 ************************************ 00:09:19.738 END TEST nvmf_zcopy 00:09:19.738 ************************************ 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:19.738 12:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.739 ************************************ 00:09:19.739 START TEST nvmf_nmic 00:09:19.739 ************************************ 00:09:19.739 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:19.739 * Looking for test storage... 
00:09:19.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.739 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:19.739 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:19.739 12:20:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.739 --rc genhtml_branch_coverage=1 00:09:19.739 --rc genhtml_function_coverage=1 00:09:19.739 --rc genhtml_legend=1 00:09:19.739 --rc geninfo_all_blocks=1 00:09:19.739 --rc geninfo_unexecuted_blocks=1 00:09:19.739 00:09:19.739 ' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.739 --rc genhtml_branch_coverage=1 00:09:19.739 --rc genhtml_function_coverage=1 00:09:19.739 --rc genhtml_legend=1 00:09:19.739 --rc geninfo_all_blocks=1 00:09:19.739 --rc geninfo_unexecuted_blocks=1 00:09:19.739 00:09:19.739 ' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.739 --rc genhtml_branch_coverage=1 00:09:19.739 --rc genhtml_function_coverage=1 00:09:19.739 --rc genhtml_legend=1 00:09:19.739 --rc geninfo_all_blocks=1 00:09:19.739 --rc geninfo_unexecuted_blocks=1 00:09:19.739 00:09:19.739 ' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.739 --rc genhtml_branch_coverage=1 00:09:19.739 --rc genhtml_function_coverage=1 00:09:19.739 --rc genhtml_legend=1 00:09:19.739 --rc geninfo_all_blocks=1 00:09:19.739 --rc geninfo_unexecuted_blocks=1 00:09:19.739 00:09:19.739 ' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
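The xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2): both version strings are split on '.' and '-' and compared field by field, with missing fields treated as 0. A condensed sketch of that comparison, not the verbatim helper:

    # sketch of the field-wise compare traced above (cmp_versions 1.15 '<' 2)
    cmp_versions() { # usage: cmp_versions 1.15 '<' 2
        local IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $2 == '>' ]] && return 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $2 == '<' ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1
        done
        [[ $2 == *=* ]] # all fields equal: true only for '<=', '>=', '=='
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "installed lcov predates 2.x"   # matches the '1.15 < 2' result traced here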
00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.739 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:19.740 
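The "[: : integer expression expected" line above is a bash complaint rather than test output: nvmf/common.sh line 33 applies a numeric test ('[' '' -eq 1 ']') to a variable that is empty in this configuration, the test evaluates false, and the script carries on, so the message is harmless noise that recurs every time common.sh is sourced in this log. A defensive spelling would default the value first; the variable name below is hypothetical, since the log does not show which one is unset:

# Hypothetical guard; SOME_NVMF_FLAG stands in for whatever common.sh:33 tests.
if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--extra-arg)        # illustrative only
fi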
12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.740 12:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:21.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.647 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:21.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.648 12:20:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:21.648 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:21.648 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
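The "Found net devices under ..." lines come from a plain sysfs walk: for each supported PCI function the script globs its net/ subdirectory and strips the glob results down to bare interface names. A standalone sketch of that step, with the PCI address taken from the log:

pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # keep interface names only
echo "Found net devices under $pci: ${pci_net_devs[*]}"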
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.648 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:09:21.907 00:09:21.907 --- 10.0.0.2 ping statistics --- 00:09:21.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.907 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:09:21.907 00:09:21.907 --- 10.0.0.1 ping statistics --- 00:09:21.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.907 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=535774 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 535774 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 535774 ']' 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.907 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.907 [2024-10-30 12:20:54.483530] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
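Condensed from the trace above, this is the entire network fixture: the target-side port is moved into its own namespace so host and target can speak NVMe/TCP over real hardware on one machine, with an iptables ACCEPT rule for the 4420 listener and a ping in each direction to prove reachability. Device names and addresses are the ones in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, host ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # host ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> host ns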
00:09:21.907 [2024-10-30 12:20:54.483630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.907 [2024-10-30 12:20:54.555754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.165 [2024-10-30 12:20:54.614717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.165 [2024-10-30 12:20:54.614771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.165 [2024-10-30 12:20:54.614795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.165 [2024-10-30 12:20:54.614805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.165 [2024-10-30 12:20:54.614815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.165 [2024-10-30 12:20:54.616301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.165 [2024-10-30 12:20:54.616361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.165 [2024-10-30 12:20:54.616428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.165 [2024-10-30 12:20:54.616431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 [2024-10-30 12:20:54.762789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 Malloc0 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 [2024-10-30 12:20:54.838234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:22.165 test case1: single bdev can't be used in multiple subsystems 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.165 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 [2024-10-30 12:20:54.862050] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:22.423 [2024-10-30 12:20:54.862080] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:22.423 [2024-10-30 12:20:54.862094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.423 request: 00:09:22.423 { 00:09:22.423 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:22.423 "namespace": { 00:09:22.423 "bdev_name": "Malloc0", 00:09:22.423 "no_auto_visible": false 
00:09:22.423 }, 00:09:22.423 "method": "nvmf_subsystem_add_ns", 00:09:22.423 "req_id": 1 00:09:22.423 } 00:09:22.423 Got JSON-RPC error response 00:09:22.423 response: 00:09:22.423 { 00:09:22.423 "code": -32602, 00:09:22.423 "message": "Invalid parameters" 00:09:22.423 } 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:22.423 Adding namespace failed - expected result. 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:22.423 test case2: host connect to nvmf target in multiple paths 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 [2024-10-30 12:20:54.870160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.423 12:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.989 12:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:23.554 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.554 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:23.554 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.554 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:23.554 12:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:25.499 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:25.500 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:25.500 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.500 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:25.500 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.500 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:25.500 12:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
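Both nmic cases reduce to a short JSON-RPC sequence against the nvmf_tgt started above. The sketch below replays it with scripts/rpc.py instead of the rpc_cmd wrapper; RPC method names and arguments are those in the trace, while the nvme connect calls omit the --hostnqn/--hostid pair the suite adds:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# case1: Malloc0 is already claimed (exclusive_write) by cnode1, so this fails
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # -> Invalid parameters
# case2: a second listener on cnode1 gives the host two paths to one subsystem
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421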
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:25.500 [global] 00:09:25.500 thread=1 00:09:25.500 invalidate=1 00:09:25.500 rw=write 00:09:25.500 time_based=1 00:09:25.500 runtime=1 00:09:25.500 ioengine=libaio 00:09:25.500 direct=1 00:09:25.500 bs=4096 00:09:25.500 iodepth=1 00:09:25.500 norandommap=0 00:09:25.500 numjobs=1 00:09:25.500 00:09:25.500 verify_dump=1 00:09:25.500 verify_backlog=512 00:09:25.500 verify_state_save=0 00:09:25.500 do_verify=1 00:09:25.500 verify=crc32c-intel 00:09:25.500 [job0] 00:09:25.500 filename=/dev/nvme0n1 00:09:25.500 Could not set queue depth (nvme0n1) 00:09:25.801 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:25.801 fio-3.35 00:09:25.801 Starting 1 thread 00:09:26.805 00:09:26.805 job0: (groupid=0, jobs=1): err= 0: pid=536294: Wed Oct 30 12:20:59 2024 00:09:26.805 read: IOPS=2079, BW=8320KiB/s (8519kB/s)(8328KiB/1001msec) 00:09:26.805 slat (nsec): min=5435, max=73938, avg=12959.33, stdev=8262.61 00:09:26.805 clat (usec): min=170, max=3559, avg=251.95, stdev=106.34 00:09:26.805 lat (usec): min=176, max=3566, avg=264.91, stdev=108.73 00:09:26.805 clat percentiles (usec): 00:09:26.805 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:09:26.805 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 229], 00:09:26.805 | 70.00th=[ 245], 80.00th=[ 285], 90.00th=[ 416], 95.00th=[ 433], 00:09:26.805 | 99.00th=[ 469], 99.50th=[ 482], 99.90th=[ 523], 99.95th=[ 586], 00:09:26.805 | 99.99th=[ 3556] 00:09:26.805 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:26.805 slat (usec): min=7, max=28516, avg=23.64, stdev=563.38 00:09:26.805 clat (usec): min=122, max=283, avg=145.19, stdev=11.66 00:09:26.805 lat (usec): min=130, max=28742, avg=168.83, stdev=565.14 00:09:26.805 clat percentiles (usec): 00:09:26.805 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 137], 00:09:26.805 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:09:26.805 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 167], 00:09:26.805 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 227], 99.95th=[ 227], 00:09:26.805 | 99.99th=[ 285] 00:09:26.805 bw ( KiB/s): min=10208, max=10208, per=99.79%, avg=10208.00, stdev= 0.00, samples=1 00:09:26.805 iops : min= 2552, max= 2552, avg=2552.00, stdev= 0.00, samples=1 00:09:26.805 lat (usec) : 250=87.27%, 500=12.65%, 750=0.06% 00:09:26.805 lat (msec) : 4=0.02% 00:09:26.805 cpu : usr=2.00%, sys=7.10%, ctx=4646, majf=0, minf=1 00:09:26.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.805 issued rwts: total=2082,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.805 00:09:26.805 Run status group 0 (all jobs): 00:09:26.805 READ: bw=8320KiB/s (8519kB/s), 8320KiB/s-8320KiB/s (8519kB/s-8519kB/s), io=8328KiB (8528kB), run=1001-1001msec 00:09:26.805 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:26.805 00:09:26.805 Disk stats (read/write): 00:09:26.805 nvme0n1: ios=2020/2048, merge=0/0, ticks=1474/283, in_queue=1757, util=98.50% 00:09:26.805 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
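A quick consistency check on the fio summary above: 2082 reads × 4096 B = 8328 KiB moved in the 1001 ms runtime, i.e. 8328/1.001 ≈ 8320 KiB/s, matching the reported READ bandwidth; on the write side, 2560 × 4 KiB = 10.0 MiB over the same window gives the reported 9.99 MiB/s. With iodepth=1 and numjobs=1 there is never more than one I/O outstanding, so these rates track per-I/O latency directly, and the lone ~28.5 ms write submission-latency outlier (slat max=28516 µs against a 23.6 µs average) is what stretches the tail rather than any queueing effect.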
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:27.063 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.064 rmmod nvme_tcp 00:09:27.064 rmmod nvme_fabrics 00:09:27.064 rmmod nvme_keyring 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 535774 ']' 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 535774 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 535774 ']' 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 535774 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 535774 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 535774' 00:09:27.064 killing process with pid 535774 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 535774 00:09:27.064 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 535774 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.322 12:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.863 00:09:29.863 real 0m10.094s 00:09:29.863 user 0m22.125s 00:09:29.863 sys 0m2.701s 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.863 ************************************ 00:09:29.863 END TEST nvmf_nmic 00:09:29.863 ************************************ 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.863 ************************************ 00:09:29.863 START TEST nvmf_fio_target 00:09:29.863 ************************************ 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:29.863 * Looking for test storage... 
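The starred START TEST / END TEST banners and the real/user/sys triple bracketing each suite come from autotest's run_test wrapper, which times a test script and propagates its exit status. A simplified sketch of that pattern (the real helper in autotest_common.sh also manages xtrace state; $rootdir stands for the absolute checkout path shown in the trace):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}
run_test nvmf_fio_target "$rootdir/test/nvmf/target/fio.sh" --transport=tcp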
00:09:29.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.863 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:29.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.864 --rc genhtml_branch_coverage=1 00:09:29.864 --rc genhtml_function_coverage=1 00:09:29.864 --rc genhtml_legend=1 00:09:29.864 --rc geninfo_all_blocks=1 00:09:29.864 --rc geninfo_unexecuted_blocks=1 00:09:29.864 00:09:29.864 ' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:29.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.864 --rc genhtml_branch_coverage=1 00:09:29.864 --rc genhtml_function_coverage=1 00:09:29.864 --rc genhtml_legend=1 00:09:29.864 --rc geninfo_all_blocks=1 00:09:29.864 --rc geninfo_unexecuted_blocks=1 00:09:29.864 00:09:29.864 ' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:29.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.864 --rc genhtml_branch_coverage=1 00:09:29.864 --rc genhtml_function_coverage=1 00:09:29.864 --rc genhtml_legend=1 00:09:29.864 --rc geninfo_all_blocks=1 00:09:29.864 --rc geninfo_unexecuted_blocks=1 00:09:29.864 00:09:29.864 ' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:29.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.864 --rc genhtml_branch_coverage=1 00:09:29.864 --rc genhtml_function_coverage=1 00:09:29.864 --rc genhtml_legend=1 00:09:29.864 --rc geninfo_all_blocks=1 00:09:29.864 --rc geninfo_unexecuted_blocks=1 00:09:29.864 00:09:29.864 ' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.864 12:21:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.864 12:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.770 12:21:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:31.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:31.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.770 12:21:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.770 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:31.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:31.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.771 12:21:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.771 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:09:32.030 00:09:32.030 --- 10.0.0.2 ping statistics --- 00:09:32.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.030 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:09:32.030 00:09:32.030 --- 10.0.0.1 ping statistics --- 00:09:32.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.030 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=538613 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 538613 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 538613 ']' 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.030 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.030 [2024-10-30 12:21:04.637452] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
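
The nvmftestinit sequence above splits the two e810 ports into a target/initiator pair: cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The target application is then launched under ip netns exec, which is why the EAL startup notices that follow originate inside the namespace. A minimal standalone sketch of that wiring, assuming the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing reported in the trace:

#!/usr/bin/env bash
# Sketch of the namespace wiring performed by nvmftestinit above;
# assumes the two E810 ports are already named cvl_0_0 / cvl_0_1.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The two ping checks at the end of the sketch correspond to the 0.321 ms and 0.127 ms round trips recorded above; nvmfappstart then runs nvmf_tgt -i 0 -e 0xFFFF -m 0xF inside the namespace, matching the ip netns exec invocation logged by nvmf/common.sh@508.
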
00:09:32.030 [2024-10-30 12:21:04.637545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.031 [2024-10-30 12:21:04.711936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.289 [2024-10-30 12:21:04.772000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.289 [2024-10-30 12:21:04.772061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.289 [2024-10-30 12:21:04.772090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.289 [2024-10-30 12:21:04.772101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.289 [2024-10-30 12:21:04.772111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.289 [2024-10-30 12:21:04.773737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.289 [2024-10-30 12:21:04.773795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.289 [2024-10-30 12:21:04.773860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.289 [2024-10-30 12:21:04.773863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.289 12:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.547 [2024-10-30 12:21:05.225287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.804 12:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.062 12:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:33.062 12:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.320 12:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:33.320 12:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.579 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:33.579 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.837 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:33.837 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:34.095 12:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.661 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:34.661 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.918 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:34.918 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.176 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:35.176 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:35.434 12:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.692 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:35.692 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.949 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:35.949 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.207 12:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.465 [2024-10-30 12:21:09.100409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.465 12:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:36.723 12:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:36.981 12:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.913 12:21:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:37.913 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:37.913 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.913 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:37.913 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:37.913 12:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:39.812 12:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.812 [global] 00:09:39.812 thread=1 00:09:39.812 invalidate=1 00:09:39.812 rw=write 00:09:39.812 time_based=1 00:09:39.812 runtime=1 00:09:39.812 ioengine=libaio 00:09:39.812 direct=1 00:09:39.812 bs=4096 00:09:39.812 iodepth=1 00:09:39.812 norandommap=0 00:09:39.812 numjobs=1 00:09:39.812 00:09:39.812 verify_dump=1 00:09:39.812 verify_backlog=512 00:09:39.812 verify_state_save=0 00:09:39.812 do_verify=1 00:09:39.812 verify=crc32c-intel 00:09:39.812 [job0] 00:09:39.812 filename=/dev/nvme0n1 00:09:39.812 [job1] 00:09:39.812 filename=/dev/nvme0n2 00:09:39.812 [job2] 00:09:39.812 filename=/dev/nvme0n3 00:09:39.812 [job3] 00:09:39.812 filename=/dev/nvme0n4 00:09:39.812 Could not set queue depth (nvme0n1) 00:09:39.812 Could not set queue depth (nvme0n2) 00:09:39.812 Could not set queue depth (nvme0n3) 00:09:39.812 Could not set queue depth (nvme0n4) 00:09:40.070 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.070 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.070 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.070 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.070 fio-3.35 00:09:40.070 Starting 4 threads 00:09:41.467 00:09:41.467 job0: (groupid=0, jobs=1): err= 0: pid=540200: Wed Oct 30 12:21:13 2024 00:09:41.467 read: IOPS=1832, BW=7329KiB/s (7505kB/s)(7336KiB/1001msec) 00:09:41.467 slat (nsec): min=5729, max=36142, avg=11983.73, stdev=5164.73 00:09:41.467 clat (usec): min=212, max=41009, avg=286.67, stdev=951.67 00:09:41.467 lat (usec): min=218, max=41016, avg=298.65, stdev=951.63 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 
00:09:41.467 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:09:41.467 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 297], 00:09:41.467 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 371], 99.95th=[41157], 00:09:41.467 | 99.99th=[41157] 00:09:41.467 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:41.467 slat (nsec): min=7267, max=65373, avg=15904.08, stdev=7079.70 00:09:41.467 clat (usec): min=148, max=432, avg=197.46, stdev=29.05 00:09:41.467 lat (usec): min=156, max=453, avg=213.37, stdev=31.49 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:09:41.467 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:41.467 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 249], 00:09:41.467 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 379], 99.95th=[ 424], 00:09:41.467 | 99.99th=[ 433] 00:09:41.467 bw ( KiB/s): min= 8192, max= 8192, per=33.97%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.467 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.467 lat (usec) : 250=63.81%, 500=36.17% 00:09:41.467 lat (msec) : 50=0.03% 00:09:41.467 cpu : usr=3.60%, sys=8.10%, ctx=3882, majf=0, minf=1 00:09:41.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.467 issued rwts: total=1834,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.467 job1: (groupid=0, jobs=1): err= 0: pid=540202: Wed Oct 30 12:21:13 2024 00:09:41.467 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:09:41.467 slat (nsec): min=8492, max=34074, avg=21104.41, stdev=8321.01 00:09:41.467 clat (usec): min=40874, max=41050, avg=40970.75, stdev=52.10 00:09:41.467 lat (usec): min=40895, max=41083, avg=40991.86, stdev=50.25 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:41.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:41.467 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:41.467 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:41.467 | 99.99th=[41157] 00:09:41.467 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:41.467 slat (nsec): min=7181, max=63067, avg=12468.69, stdev=6460.78 00:09:41.467 clat (usec): min=137, max=340, avg=196.52, stdev=36.26 00:09:41.467 lat (usec): min=145, max=369, avg=208.99, stdev=36.47 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:09:41.467 | 30.00th=[ 169], 40.00th=[ 182], 50.00th=[ 196], 60.00th=[ 210], 00:09:41.467 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 269], 00:09:41.467 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 343], 99.95th=[ 343], 00:09:41.467 | 99.99th=[ 343] 00:09:41.467 bw ( KiB/s): min= 4096, max= 4096, per=16.98%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.467 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.467 lat (usec) : 250=89.70%, 500=6.18% 00:09:41.467 lat (msec) : 50=4.12% 00:09:41.467 cpu : usr=0.40%, sys=0.50%, ctx=535, majf=0, minf=1 00:09:41.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:41.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.467 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.467 job2: (groupid=0, jobs=1): err= 0: pid=540203: Wed Oct 30 12:21:13 2024 00:09:41.467 read: IOPS=1762, BW=7049KiB/s (7218kB/s)(7056KiB/1001msec) 00:09:41.467 slat (nsec): min=5754, max=48134, avg=12672.98, stdev=5378.49 00:09:41.467 clat (usec): min=213, max=618, avg=288.70, stdev=63.07 00:09:41.467 lat (usec): min=219, max=625, avg=301.37, stdev=63.28 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:09:41.467 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:09:41.467 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 433], 00:09:41.467 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 603], 99.95th=[ 619], 00:09:41.467 | 99.99th=[ 619] 00:09:41.467 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:41.467 slat (nsec): min=7740, max=58541, avg=16530.03, stdev=7134.20 00:09:41.467 clat (usec): min=162, max=1271, avg=204.69, stdev=35.12 00:09:41.467 lat (usec): min=172, max=1292, avg=221.22, stdev=37.27 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:41.467 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:09:41.467 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 241], 00:09:41.467 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 619], 99.95th=[ 807], 00:09:41.467 | 99.99th=[ 1270] 00:09:41.467 bw ( KiB/s): min= 8192, max= 8192, per=33.97%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.467 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.467 lat (usec) : 250=61.78%, 500=37.01%, 750=1.15%, 1000=0.03% 00:09:41.467 lat (msec) : 2=0.03% 00:09:41.467 cpu : usr=5.00%, sys=6.70%, ctx=3813, majf=0, minf=1 00:09:41.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.467 issued rwts: total=1764,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.467 job3: (groupid=0, jobs=1): err= 0: pid=540204: Wed Oct 30 12:21:13 2024 00:09:41.467 read: IOPS=1015, BW=4063KiB/s (4160kB/s)(4140KiB/1019msec) 00:09:41.467 slat (nsec): min=6209, max=42761, avg=12505.90, stdev=6096.59 00:09:41.467 clat (usec): min=198, max=41210, avg=641.47, stdev=3989.61 00:09:41.467 lat (usec): min=206, max=41228, avg=653.97, stdev=3990.20 00:09:41.467 clat percentiles (usec): 00:09:41.467 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:09:41.467 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:09:41.467 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:09:41.468 | 99.00th=[ 1582], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:41.468 | 99.99th=[41157] 00:09:41.468 write: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec); 0 zone resets 00:09:41.468 slat (nsec): min=7973, max=73985, avg=16056.28, stdev=7956.41 00:09:41.468 clat (usec): min=139, max=3028, avg=199.73, stdev=85.09 00:09:41.468 lat (usec): min=147, max=3045, avg=215.78, 
stdev=87.19 00:09:41.468 clat percentiles (usec): 00:09:41.468 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:09:41.468 | 30.00th=[ 172], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 202], 00:09:41.468 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 273], 00:09:41.468 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 502], 99.95th=[ 3032], 00:09:41.468 | 99.99th=[ 3032] 00:09:41.468 bw ( KiB/s): min= 3696, max= 8592, per=25.48%, avg=6144.00, stdev=3461.99, samples=2 00:09:41.468 iops : min= 924, max= 2148, avg=1536.00, stdev=865.50, samples=2 00:09:41.468 lat (usec) : 250=77.79%, 500=21.63%, 750=0.08% 00:09:41.468 lat (msec) : 2=0.08%, 4=0.04%, 50=0.39% 00:09:41.468 cpu : usr=2.65%, sys=4.81%, ctx=2574, majf=0, minf=1 00:09:41.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.468 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.468 00:09:41.468 Run status group 0 (all jobs): 00:09:41.468 READ: bw=17.8MiB/s (18.7MB/s), 87.0KiB/s-7329KiB/s (89.1kB/s-7505kB/s), io=18.2MiB (19.1MB), run=1001-1019msec 00:09:41.468 WRITE: bw=23.6MiB/s (24.7MB/s), 2026KiB/s-8184KiB/s (2074kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1019msec 00:09:41.468 00:09:41.468 Disk stats (read/write): 00:09:41.468 nvme0n1: ios=1586/1773, merge=0/0, ticks=441/333, in_queue=774, util=86.77% 00:09:41.468 nvme0n2: ios=42/512, merge=0/0, ticks=1723/100, in_queue=1823, util=98.37% 00:09:41.468 nvme0n3: ios=1594/1691, merge=0/0, ticks=1249/330, in_queue=1579, util=98.23% 00:09:41.468 nvme0n4: ios=1053/1536, merge=0/0, ticks=1389/283, in_queue=1672, util=98.21% 00:09:41.468 12:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:41.468 [global] 00:09:41.468 thread=1 00:09:41.468 invalidate=1 00:09:41.468 rw=randwrite 00:09:41.468 time_based=1 00:09:41.468 runtime=1 00:09:41.468 ioengine=libaio 00:09:41.468 direct=1 00:09:41.468 bs=4096 00:09:41.468 iodepth=1 00:09:41.468 norandommap=0 00:09:41.468 numjobs=1 00:09:41.468 00:09:41.468 verify_dump=1 00:09:41.468 verify_backlog=512 00:09:41.468 verify_state_save=0 00:09:41.468 do_verify=1 00:09:41.468 verify=crc32c-intel 00:09:41.468 [job0] 00:09:41.468 filename=/dev/nvme0n1 00:09:41.468 [job1] 00:09:41.468 filename=/dev/nvme0n2 00:09:41.468 [job2] 00:09:41.468 filename=/dev/nvme0n3 00:09:41.468 [job3] 00:09:41.468 filename=/dev/nvme0n4 00:09:41.468 Could not set queue depth (nvme0n1) 00:09:41.468 Could not set queue depth (nvme0n2) 00:09:41.468 Could not set queue depth (nvme0n3) 00:09:41.468 Could not set queue depth (nvme0n4) 00:09:41.468 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.468 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.468 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.468 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.468 fio-3.35 00:09:41.468 Starting 4 threads 00:09:42.840 00:09:42.840 job0: (groupid=0, jobs=1): err= 0: pid=540440: 
Wed Oct 30 12:21:15 2024 00:09:42.840 read: IOPS=2158, BW=8635KiB/s (8843kB/s)(8644KiB/1001msec) 00:09:42.840 slat (nsec): min=5070, max=44361, avg=11083.96, stdev=5450.63 00:09:42.840 clat (usec): min=172, max=2176, avg=215.32, stdev=57.69 00:09:42.840 lat (usec): min=179, max=2187, avg=226.41, stdev=58.96 00:09:42.840 clat percentiles (usec): 00:09:42.840 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:09:42.840 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:09:42.840 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 310], 00:09:42.840 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 502], 99.95th=[ 553], 00:09:42.840 | 99.99th=[ 2180] 00:09:42.840 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:42.840 slat (nsec): min=6562, max=49014, avg=13488.40, stdev=5249.85 00:09:42.840 clat (usec): min=129, max=2169, avg=179.67, stdev=66.99 00:09:42.840 lat (usec): min=136, max=2176, avg=193.15, stdev=66.24 00:09:42.840 clat percentiles (usec): 00:09:42.840 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:42.840 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:09:42.840 | 70.00th=[ 180], 80.00th=[ 204], 90.00th=[ 237], 95.00th=[ 247], 00:09:42.840 | 99.00th=[ 318], 99.50th=[ 379], 99.90th=[ 750], 99.95th=[ 2089], 00:09:42.840 | 99.99th=[ 2180] 00:09:42.840 bw ( KiB/s): min=10776, max=10776, per=36.65%, avg=10776.00, stdev= 0.00, samples=1 00:09:42.840 iops : min= 2694, max= 2694, avg=2694.00, stdev= 0.00, samples=1 00:09:42.840 lat (usec) : 250=94.60%, 500=5.25%, 750=0.06%, 1000=0.02% 00:09:42.840 lat (msec) : 4=0.06% 00:09:42.840 cpu : usr=2.80%, sys=6.50%, ctx=4722, majf=0, minf=1 00:09:42.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.840 issued rwts: total=2161,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.841 job1: (groupid=0, jobs=1): err= 0: pid=540441: Wed Oct 30 12:21:15 2024 00:09:42.841 read: IOPS=29, BW=120KiB/s (123kB/s)(120KiB/1002msec) 00:09:42.841 slat (nsec): min=9374, max=33392, avg=18536.70, stdev=6934.74 00:09:42.841 clat (usec): min=253, max=41075, avg=28766.85, stdev=18868.28 00:09:42.841 lat (usec): min=268, max=41092, avg=28785.38, stdev=18871.03 00:09:42.841 clat percentiles (usec): 00:09:42.841 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 396], 20.00th=[ 478], 00:09:42.841 | 30.00th=[ 562], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:09:42.841 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:42.841 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:42.841 | 99.99th=[41157] 00:09:42.841 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:42.841 slat (nsec): min=6255, max=39078, avg=9911.62, stdev=4493.77 00:09:42.841 clat (usec): min=126, max=823, avg=256.11, stdev=89.35 00:09:42.841 lat (usec): min=133, max=833, avg=266.02, stdev=89.59 00:09:42.841 clat percentiles (usec): 00:09:42.841 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 198], 00:09:42.841 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 245], 00:09:42.841 | 70.00th=[ 258], 80.00th=[ 322], 90.00th=[ 396], 95.00th=[ 420], 00:09:42.841 | 99.00th=[ 453], 99.50th=[ 668], 99.90th=[ 824], 99.95th=[ 824], 
00:09:42.841 | 99.99th=[ 824] 00:09:42.841 bw ( KiB/s): min= 4096, max= 4096, per=13.93%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.841 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.841 lat (usec) : 250=61.99%, 500=32.66%, 750=1.29%, 1000=0.18% 00:09:42.841 lat (msec) : 50=3.87% 00:09:42.841 cpu : usr=0.30%, sys=0.50%, ctx=544, majf=0, minf=1 00:09:42.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.841 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.841 job2: (groupid=0, jobs=1): err= 0: pid=540443: Wed Oct 30 12:21:15 2024 00:09:42.841 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:42.841 slat (nsec): min=5801, max=42828, avg=12509.45, stdev=5108.76 00:09:42.841 clat (usec): min=194, max=625, avg=241.20, stdev=23.58 00:09:42.841 lat (usec): min=200, max=641, avg=253.71, stdev=26.69 00:09:42.841 clat percentiles (usec): 00:09:42.841 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:09:42.841 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 249], 00:09:42.841 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 273], 00:09:42.841 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 338], 99.95th=[ 441], 00:09:42.841 | 99.99th=[ 627] 00:09:42.841 write: IOPS=2243, BW=8975KiB/s (9190kB/s)(8984KiB/1001msec); 0 zone resets 00:09:42.841 slat (nsec): min=7215, max=56545, avg=16432.17, stdev=6667.26 00:09:42.841 clat (usec): min=148, max=389, avg=189.40, stdev=19.56 00:09:42.841 lat (usec): min=159, max=411, avg=205.84, stdev=22.83 00:09:42.841 clat percentiles (usec): 00:09:42.841 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:42.841 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:09:42.841 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 221], 00:09:42.841 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 359], 99.95th=[ 388], 00:09:42.841 | 99.99th=[ 388] 00:09:42.841 bw ( KiB/s): min= 9512, max= 9512, per=32.35%, avg=9512.00, stdev= 0.00, samples=1 00:09:42.841 iops : min= 2378, max= 2378, avg=2378.00, stdev= 0.00, samples=1 00:09:42.841 lat (usec) : 250=82.84%, 500=17.14%, 750=0.02% 00:09:42.841 cpu : usr=4.20%, sys=8.90%, ctx=4295, majf=0, minf=2 00:09:42.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.841 issued rwts: total=2048,2246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.841 job3: (groupid=0, jobs=1): err= 0: pid=540447: Wed Oct 30 12:21:15 2024 00:09:42.841 read: IOPS=1706, BW=6825KiB/s (6989kB/s)(6832KiB/1001msec) 00:09:42.841 slat (nsec): min=5943, max=62560, avg=14229.74, stdev=6718.59 00:09:42.841 clat (usec): min=208, max=1051, avg=292.06, stdev=74.51 00:09:42.841 lat (usec): min=215, max=1062, avg=306.29, stdev=77.36 00:09:42.841 clat percentiles (usec): 00:09:42.841 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 247], 00:09:42.841 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:09:42.841 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 383], 95.00th=[ 
457], 00:09:42.841 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 963], 99.95th=[ 1057], 00:09:42.841 | 99.99th=[ 1057] 00:09:42.841 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:42.841 slat (nsec): min=7618, max=63702, avg=16159.95, stdev=7202.81 00:09:42.841 clat (usec): min=154, max=663, avg=208.60, stdev=41.72 00:09:42.841 lat (usec): min=162, max=685, avg=224.76, stdev=45.56 00:09:42.841 clat percentiles (usec): 00:09:42.841 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:09:42.841 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:09:42.841 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 265], 00:09:42.841 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 586], 99.95th=[ 594], 00:09:42.841 | 99.99th=[ 668] 00:09:42.841 bw ( KiB/s): min= 8344, max= 8344, per=28.38%, avg=8344.00, stdev= 0.00, samples=1 00:09:42.841 iops : min= 2086, max= 2086, avg=2086.00, stdev= 0.00, samples=1 00:09:42.841 lat (usec) : 250=61.85%, 500=36.34%, 750=1.70%, 1000=0.08% 00:09:42.841 lat (msec) : 2=0.03% 00:09:42.841 cpu : usr=4.20%, sys=7.90%, ctx=3759, majf=0, minf=2 00:09:42.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.841 issued rwts: total=1708,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.841 00:09:42.841 Run status group 0 (all jobs): 00:09:42.841 READ: bw=23.2MiB/s (24.3MB/s), 120KiB/s-8635KiB/s (123kB/s-8843kB/s), io=23.2MiB (24.4MB), run=1001-1002msec 00:09:42.841 WRITE: bw=28.7MiB/s (30.1MB/s), 2044KiB/s-9.99MiB/s (2093kB/s-10.5MB/s), io=28.8MiB (30.2MB), run=1001-1002msec 00:09:42.841 00:09:42.841 Disk stats (read/write): 00:09:42.841 nvme0n1: ios=2046/2048, merge=0/0, ticks=640/354, in_queue=994, util=100.00% 00:09:42.841 nvme0n2: ios=67/512, merge=0/0, ticks=1699/127, in_queue=1826, util=97.97% 00:09:42.841 nvme0n3: ios=1670/2048, merge=0/0, ticks=1167/367, in_queue=1534, util=99.90% 00:09:42.841 nvme0n4: ios=1559/1697, merge=0/0, ticks=1359/343, in_queue=1702, util=98.11% 00:09:42.841 12:21:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:42.841 [global] 00:09:42.841 thread=1 00:09:42.841 invalidate=1 00:09:42.841 rw=write 00:09:42.841 time_based=1 00:09:42.841 runtime=1 00:09:42.841 ioengine=libaio 00:09:42.841 direct=1 00:09:42.841 bs=4096 00:09:42.841 iodepth=128 00:09:42.841 norandommap=0 00:09:42.841 numjobs=1 00:09:42.841 00:09:42.841 verify_dump=1 00:09:42.841 verify_backlog=512 00:09:42.841 verify_state_save=0 00:09:42.841 do_verify=1 00:09:42.841 verify=crc32c-intel 00:09:42.841 [job0] 00:09:42.841 filename=/dev/nvme0n1 00:09:42.841 [job1] 00:09:42.841 filename=/dev/nvme0n2 00:09:42.841 [job2] 00:09:42.841 filename=/dev/nvme0n3 00:09:42.841 [job3] 00:09:42.841 filename=/dev/nvme0n4 00:09:42.841 Could not set queue depth (nvme0n1) 00:09:42.841 Could not set queue depth (nvme0n2) 00:09:42.841 Could not set queue depth (nvme0n3) 00:09:42.841 Could not set queue depth (nvme0n4) 00:09:42.841 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.841 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:09:42.841 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.841 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.841 fio-3.35 00:09:42.841 Starting 4 threads 00:09:44.216 00:09:44.216 job0: (groupid=0, jobs=1): err= 0: pid=540672: Wed Oct 30 12:21:16 2024 00:09:44.216 read: IOPS=2452, BW=9812KiB/s (10.0MB/s)(9900KiB/1009msec) 00:09:44.216 slat (usec): min=3, max=19880, avg=237.17, stdev=1205.41 00:09:44.216 clat (usec): min=1697, max=92913, avg=30873.97, stdev=17768.99 00:09:44.216 lat (usec): min=12534, max=92925, avg=31111.15, stdev=17827.56 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[14877], 5.00th=[17171], 10.00th=[18220], 20.00th=[19530], 00:09:44.217 | 30.00th=[20055], 40.00th=[20055], 50.00th=[20317], 60.00th=[21627], 00:09:44.217 | 70.00th=[34866], 80.00th=[46924], 90.00th=[55313], 95.00th=[66847], 00:09:44.217 | 99.00th=[89654], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:09:44.217 | 99.99th=[92799] 00:09:44.217 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:09:44.217 slat (usec): min=3, max=12305, avg=148.84, stdev=914.68 00:09:44.217 clat (usec): min=887, max=102832, avg=20155.82, stdev=14312.14 00:09:44.217 lat (usec): min=894, max=102846, avg=20304.66, stdev=14363.14 00:09:44.217 clat percentiles (msec): 00:09:44.217 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 14], 00:09:44.217 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:09:44.217 | 70.00th=[ 17], 80.00th=[ 23], 90.00th=[ 36], 95.00th=[ 48], 00:09:44.217 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 104], 99.95th=[ 104], 00:09:44.217 | 99.99th=[ 104] 00:09:44.217 bw ( KiB/s): min= 8360, max=12120, per=20.32%, avg=10240.00, stdev=2658.72, samples=2 00:09:44.217 iops : min= 2090, max= 3030, avg=2560.00, stdev=664.68, samples=2 00:09:44.217 lat (usec) : 1000=0.22% 00:09:44.217 lat (msec) : 2=0.18%, 10=0.10%, 20=54.48%, 50=33.88%, 100=11.02% 00:09:44.217 lat (msec) : 250=0.12% 00:09:44.217 cpu : usr=4.56%, sys=5.36%, ctx=238, majf=0, minf=2 00:09:44.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:09:44.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.217 issued rwts: total=2475,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.217 job1: (groupid=0, jobs=1): err= 0: pid=540673: Wed Oct 30 12:21:16 2024 00:09:44.217 read: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(11.7MiB/1044msec) 00:09:44.217 slat (usec): min=3, max=15598, avg=146.45, stdev=919.82 00:09:44.217 clat (usec): min=9174, max=55799, avg=20621.30, stdev=8543.91 00:09:44.217 lat (usec): min=9179, max=57633, avg=20767.75, stdev=8588.77 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[10421], 5.00th=[12518], 10.00th=[13042], 20.00th=[13435], 00:09:44.217 | 30.00th=[16188], 40.00th=[17695], 50.00th=[19006], 60.00th=[20055], 00:09:44.217 | 70.00th=[21890], 80.00th=[24511], 90.00th=[30802], 95.00th=[32375], 00:09:44.217 | 99.00th=[54789], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:09:44.217 | 99.99th=[55837] 00:09:44.217 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:09:44.217 slat (usec): min=3, max=9025, avg=169.91, stdev=659.47 00:09:44.217 clat (usec): min=9221, 
max=51913, avg=22894.48, stdev=8752.93 00:09:44.217 lat (usec): min=9231, max=51955, avg=23064.39, stdev=8806.68 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[11207], 20.00th=[19268], 00:09:44.217 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[20841], 00:09:44.217 | 70.00th=[23200], 80.00th=[29492], 90.00th=[35914], 95.00th=[41157], 00:09:44.217 | 99.00th=[49546], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:09:44.217 | 99.99th=[52167] 00:09:44.217 bw ( KiB/s): min=12288, max=12288, per=24.39%, avg=12288.00, stdev= 0.00, samples=2 00:09:44.217 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:44.217 lat (msec) : 10=1.43%, 20=45.88%, 50=51.19%, 100=1.50% 00:09:44.217 cpu : usr=5.27%, sys=7.29%, ctx=411, majf=0, minf=1 00:09:44.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:44.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.217 issued rwts: total=3002,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.217 job2: (groupid=0, jobs=1): err= 0: pid=540674: Wed Oct 30 12:21:16 2024 00:09:44.217 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:09:44.217 slat (usec): min=4, max=8800, avg=135.94, stdev=762.12 00:09:44.217 clat (usec): min=9120, max=32987, avg=16322.18, stdev=4163.11 00:09:44.217 lat (usec): min=9129, max=33006, avg=16458.12, stdev=4230.74 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[10159], 5.00th=[11469], 10.00th=[12649], 20.00th=[13435], 00:09:44.217 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14877], 60.00th=[16057], 00:09:44.217 | 70.00th=[17433], 80.00th=[20055], 90.00th=[21627], 95.00th=[24773], 00:09:44.217 | 99.00th=[31851], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:09:44.217 | 99.99th=[32900] 00:09:44.217 write: IOPS=3399, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1007msec); 0 zone resets 00:09:44.217 slat (usec): min=4, max=40980, avg=157.43, stdev=968.97 00:09:44.217 clat (usec): min=5396, max=53561, avg=20154.85, stdev=6786.78 00:09:44.217 lat (usec): min=6394, max=76206, avg=20312.27, stdev=6895.64 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[10683], 5.00th=[12518], 10.00th=[12649], 20.00th=[14091], 00:09:44.217 | 30.00th=[15795], 40.00th=[18744], 50.00th=[19792], 60.00th=[20317], 00:09:44.217 | 70.00th=[20579], 80.00th=[23987], 90.00th=[31065], 95.00th=[34866], 00:09:44.217 | 99.00th=[39060], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:09:44.217 | 99.99th=[53740] 00:09:44.217 bw ( KiB/s): min=12288, max=14080, per=26.17%, avg=13184.00, stdev=1267.14, samples=2 00:09:44.217 iops : min= 3072, max= 3520, avg=3296.00, stdev=316.78, samples=2 00:09:44.217 lat (msec) : 10=0.86%, 20=66.07%, 50=33.06%, 100=0.02% 00:09:44.217 cpu : usr=4.77%, sys=10.04%, ctx=394, majf=0, minf=1 00:09:44.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:44.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.217 issued rwts: total=3072,3423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.217 job3: (groupid=0, jobs=1): err= 0: pid=540675: Wed Oct 30 12:21:16 2024 00:09:44.217 read: IOPS=3893, BW=15.2MiB/s 
(15.9MB/s)(15.3MiB/1009msec) 00:09:44.217 slat (usec): min=3, max=10954, avg=112.80, stdev=703.51 00:09:44.217 clat (usec): min=3777, max=37163, avg=13745.25, stdev=5005.24 00:09:44.217 lat (usec): min=5745, max=37176, avg=13858.05, stdev=5051.90 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10683], 00:09:44.217 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12911], 00:09:44.217 | 70.00th=[13829], 80.00th=[15401], 90.00th=[21103], 95.00th=[26084], 00:09:44.217 | 99.00th=[32637], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:09:44.217 | 99.99th=[36963] 00:09:44.217 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:09:44.217 slat (usec): min=4, max=9318, avg=124.97, stdev=547.49 00:09:44.217 clat (usec): min=4418, max=37137, avg=18012.37, stdev=6529.01 00:09:44.217 lat (usec): min=4436, max=37157, avg=18137.34, stdev=6572.72 00:09:44.217 clat percentiles (usec): 00:09:44.217 | 1.00th=[ 5145], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[11076], 00:09:44.217 | 30.00th=[11731], 40.00th=[18482], 50.00th=[19792], 60.00th=[20055], 00:09:44.217 | 70.00th=[21103], 80.00th=[23725], 90.00th=[27919], 95.00th=[28443], 00:09:44.217 | 99.00th=[28705], 99.50th=[28705], 99.90th=[30016], 99.95th=[33424], 00:09:44.217 | 99.99th=[36963] 00:09:44.217 bw ( KiB/s): min=16384, max=16384, per=32.52%, avg=16384.00, stdev= 0.00, samples=2 00:09:44.217 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:44.217 lat (msec) : 4=0.01%, 10=10.99%, 20=60.41%, 50=28.59% 00:09:44.217 cpu : usr=5.56%, sys=10.62%, ctx=458, majf=0, minf=1 00:09:44.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:44.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.217 issued rwts: total=3929,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.217 00:09:44.217 Run status group 0 (all jobs): 00:09:44.217 READ: bw=46.7MiB/s (49.0MB/s), 9812KiB/s-15.2MiB/s (10.0MB/s-15.9MB/s), io=48.7MiB (51.1MB), run=1007-1044msec 00:09:44.217 WRITE: bw=49.2MiB/s (51.6MB/s), 9.91MiB/s-15.9MiB/s (10.4MB/s-16.6MB/s), io=51.4MiB (53.9MB), run=1007-1044msec 00:09:44.217 00:09:44.217 Disk stats (read/write): 00:09:44.217 nvme0n1: ios=2140/2560, merge=0/0, ticks=14541/18017, in_queue=32558, util=87.37% 00:09:44.217 nvme0n2: ios=2604/2671, merge=0/0, ticks=24186/30232, in_queue=54418, util=97.56% 00:09:44.217 nvme0n3: ios=2620/2863, merge=0/0, ticks=22249/26202, in_queue=48451, util=95.84% 00:09:44.217 nvme0n4: ios=3129/3391, merge=0/0, ticks=41216/62845, in_queue=104061, util=98.43% 00:09:44.217 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:44.217 [global] 00:09:44.217 thread=1 00:09:44.217 invalidate=1 00:09:44.217 rw=randwrite 00:09:44.217 time_based=1 00:09:44.217 runtime=1 00:09:44.217 ioengine=libaio 00:09:44.217 direct=1 00:09:44.217 bs=4096 00:09:44.217 iodepth=128 00:09:44.217 norandommap=0 00:09:44.217 numjobs=1 00:09:44.217 00:09:44.217 verify_dump=1 00:09:44.217 verify_backlog=512 00:09:44.217 verify_state_save=0 00:09:44.217 do_verify=1 00:09:44.217 verify=crc32c-intel 00:09:44.217 [job0] 00:09:44.217 filename=/dev/nvme0n1 00:09:44.217 [job1] 00:09:44.217 
filename=/dev/nvme0n2 00:09:44.217 [job2] 00:09:44.217 filename=/dev/nvme0n3 00:09:44.217 [job3] 00:09:44.217 filename=/dev/nvme0n4 00:09:44.217 Could not set queue depth (nvme0n1) 00:09:44.217 Could not set queue depth (nvme0n2) 00:09:44.217 Could not set queue depth (nvme0n3) 00:09:44.217 Could not set queue depth (nvme0n4) 00:09:44.475 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.475 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.475 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.475 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.475 fio-3.35 00:09:44.475 Starting 4 threads 00:09:45.850 00:09:45.850 job0: (groupid=0, jobs=1): err= 0: pid=541026: Wed Oct 30 12:21:18 2024 00:09:45.850 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:09:45.850 slat (usec): min=2, max=21532, avg=145.22, stdev=1045.81 00:09:45.850 clat (usec): min=1786, max=90079, avg=18892.22, stdev=14979.79 00:09:45.850 lat (usec): min=1793, max=90085, avg=19037.45, stdev=15054.80 00:09:45.850 clat percentiles (usec): 00:09:45.850 | 1.00th=[ 4293], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[11076], 00:09:45.850 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13435], 60.00th=[16057], 00:09:45.850 | 70.00th=[18744], 80.00th=[21890], 90.00th=[32113], 95.00th=[50594], 00:09:45.850 | 99.00th=[87557], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:09:45.850 | 99.99th=[89654] 00:09:45.850 write: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1011msec); 0 zone resets 00:09:45.850 slat (usec): min=3, max=21638, avg=198.87, stdev=1182.01 00:09:45.850 clat (usec): min=806, max=187781, avg=27400.66, stdev=30026.03 00:09:45.850 lat (usec): min=812, max=187802, avg=27599.54, stdev=30180.52 00:09:45.850 clat percentiles (msec): 00:09:45.850 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:09:45.850 | 30.00th=[ 12], 40.00th=[ 16], 50.00th=[ 24], 60.00th=[ 24], 00:09:45.850 | 70.00th=[ 25], 80.00th=[ 31], 90.00th=[ 45], 95.00th=[ 88], 00:09:45.850 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 188], 99.95th=[ 188], 00:09:45.850 | 99.99th=[ 188] 00:09:45.850 bw ( KiB/s): min= 8192, max=14144, per=17.42%, avg=11168.00, stdev=4208.70, samples=2 00:09:45.850 iops : min= 2048, max= 3536, avg=2792.00, stdev=1052.17, samples=2 00:09:45.850 lat (usec) : 1000=0.05% 00:09:45.850 lat (msec) : 2=0.40%, 4=0.66%, 10=10.44%, 20=45.27%, 50=35.99% 00:09:45.850 lat (msec) : 100=4.87%, 250=2.32% 00:09:45.850 cpu : usr=2.57%, sys=4.95%, ctx=377, majf=0, minf=1 00:09:45.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:45.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.851 issued rwts: total=2560,2920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.851 job1: (groupid=0, jobs=1): err= 0: pid=541027: Wed Oct 30 12:21:18 2024 00:09:45.851 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:45.851 slat (usec): min=3, max=26175, avg=94.57, stdev=606.21 00:09:45.851 clat (usec): min=5912, max=69012, avg=12761.80, stdev=8223.34 00:09:45.851 lat (usec): min=6685, max=69035, avg=12856.36, stdev=8256.15 00:09:45.851 clat 
percentiles (usec): 00:09:45.851 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10290], 00:09:45.851 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:09:45.851 | 70.00th=[11469], 80.00th=[11863], 90.00th=[13435], 95.00th=[21365], 00:09:45.851 | 99.00th=[64226], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:09:45.851 | 99.99th=[68682] 00:09:45.851 write: IOPS=5603, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:45.851 slat (usec): min=4, max=3828, avg=80.36, stdev=361.54 00:09:45.851 clat (usec): min=2685, max=18590, avg=10922.49, stdev=1910.09 00:09:45.851 lat (usec): min=2691, max=18601, avg=11002.85, stdev=1903.53 00:09:45.851 clat percentiles (usec): 00:09:45.851 | 1.00th=[ 6325], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:09:45.851 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:09:45.851 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12518], 95.00th=[15926], 00:09:45.851 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[18482], 00:09:45.851 | 99.99th=[18482] 00:09:45.851 bw ( KiB/s): min=19368, max=24576, per=34.27%, avg=21972.00, stdev=3682.61, samples=2 00:09:45.851 iops : min= 4842, max= 6144, avg=5493.00, stdev=920.65, samples=2 00:09:45.851 lat (msec) : 4=0.12%, 10=22.90%, 20=74.39%, 50=1.71%, 100=0.88% 00:09:45.851 cpu : usr=8.28%, sys=12.28%, ctx=530, majf=0, minf=1 00:09:45.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.851 issued rwts: total=5120,5620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.851 job2: (groupid=0, jobs=1): err= 0: pid=541028: Wed Oct 30 12:21:18 2024 00:09:45.851 read: IOPS=4501, BW=17.6MiB/s (18.4MB/s)(17.7MiB/1004msec) 00:09:45.851 slat (usec): min=2, max=21004, avg=111.19, stdev=759.59 00:09:45.851 clat (usec): min=1495, max=47540, avg=14246.48, stdev=5385.94 00:09:45.851 lat (usec): min=5082, max=47580, avg=14357.67, stdev=5442.22 00:09:45.851 clat percentiles (usec): 00:09:45.851 | 1.00th=[ 5473], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11863], 00:09:45.851 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:09:45.851 | 70.00th=[12911], 80.00th=[15795], 90.00th=[22414], 95.00th=[24249], 00:09:45.851 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[40109], 00:09:45.851 | 99.99th=[47449] 00:09:45.851 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:45.851 slat (usec): min=3, max=17317, avg=94.44, stdev=635.42 00:09:45.851 clat (usec): min=356, max=42816, avg=13651.76, stdev=5373.49 00:09:45.851 lat (usec): min=398, max=42834, avg=13746.21, stdev=5427.67 00:09:45.851 clat percentiles (usec): 00:09:45.851 | 1.00th=[ 2802], 5.00th=[ 7177], 10.00th=[10290], 20.00th=[11600], 00:09:45.851 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:09:45.851 | 70.00th=[12911], 80.00th=[14091], 90.00th=[21627], 95.00th=[21890], 00:09:45.851 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:09:45.851 | 99.99th=[42730] 00:09:45.851 bw ( KiB/s): min=17592, max=19272, per=28.75%, avg=18432.00, stdev=1187.94, samples=2 00:09:45.851 iops : min= 4398, max= 4818, avg=4608.00, stdev=296.98, samples=2 00:09:45.851 lat (usec) : 500=0.04%, 750=0.03%, 1000=0.08% 00:09:45.851 lat (msec) : 
2=0.20%, 4=0.72%, 10=6.70%, 20=79.05%, 50=13.17% 00:09:45.851 cpu : usr=5.08%, sys=10.57%, ctx=446, majf=0, minf=1 00:09:45.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.851 issued rwts: total=4520,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.851 job3: (groupid=0, jobs=1): err= 0: pid=541029: Wed Oct 30 12:21:18 2024 00:09:45.851 read: IOPS=2630, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1012msec) 00:09:45.851 slat (usec): min=2, max=17125, avg=147.26, stdev=967.60 00:09:45.851 clat (usec): min=8073, max=40028, avg=18478.46, stdev=6182.39 00:09:45.851 lat (usec): min=8085, max=40039, avg=18625.72, stdev=6244.86 00:09:45.851 clat percentiles (usec): 00:09:45.851 | 1.00th=[10159], 5.00th=[11863], 10.00th=[12518], 20.00th=[13304], 00:09:45.851 | 30.00th=[14484], 40.00th=[15270], 50.00th=[16319], 60.00th=[18220], 00:09:45.851 | 70.00th=[22414], 80.00th=[23725], 90.00th=[24773], 95.00th=[28967], 00:09:45.851 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:09:45.851 | 99.99th=[40109] 00:09:45.851 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:09:45.851 slat (usec): min=3, max=24953, avg=179.92, stdev=1071.28 00:09:45.851 clat (usec): min=2146, max=68687, avg=25730.76, stdev=12939.00 00:09:45.851 lat (usec): min=2158, max=68707, avg=25910.68, stdev=13008.03 00:09:45.851 clat percentiles (usec): 00:09:45.851 | 1.00th=[ 5735], 5.00th=[10290], 10.00th=[12256], 20.00th=[14877], 00:09:45.851 | 30.00th=[21103], 40.00th=[21890], 50.00th=[22676], 60.00th=[23725], 00:09:45.851 | 70.00th=[26346], 80.00th=[32113], 90.00th=[47973], 95.00th=[54264], 00:09:45.851 | 99.00th=[63177], 99.50th=[65799], 99.90th=[67634], 99.95th=[68682], 00:09:45.851 | 99.99th=[68682] 00:09:45.851 bw ( KiB/s): min=10808, max=13560, per=19.00%, avg=12184.00, stdev=1945.96, samples=2 00:09:45.851 iops : min= 2702, max= 3390, avg=3046.00, stdev=486.49, samples=2 00:09:45.851 lat (msec) : 4=0.28%, 10=2.81%, 20=41.82%, 50=50.26%, 100=4.83% 00:09:45.851 cpu : usr=2.77%, sys=6.73%, ctx=338, majf=0, minf=1 00:09:45.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:45.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.851 issued rwts: total=2662,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.851 00:09:45.851 Run status group 0 (all jobs): 00:09:45.851 READ: bw=57.4MiB/s (60.2MB/s), 9.89MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=58.1MiB (60.9MB), run=1003-1012msec 00:09:45.851 WRITE: bw=62.6MiB/s (65.6MB/s), 11.3MiB/s-21.9MiB/s (11.8MB/s-22.9MB/s), io=63.4MiB (66.4MB), run=1003-1012msec 00:09:45.851 00:09:45.851 Disk stats (read/write): 00:09:45.852 nvme0n1: ios=2364/2560, merge=0/0, ticks=28347/66777, in_queue=95124, util=86.97% 00:09:45.852 nvme0n2: ios=4506/4608, merge=0/0, ticks=15129/11231, in_queue=26360, util=98.27% 00:09:45.852 nvme0n3: ios=3625/4047, merge=0/0, ticks=32137/34394, in_queue=66531, util=97.19% 00:09:45.852 nvme0n4: ios=2097/2560, merge=0/0, ticks=25964/42818, in_queue=68782, util=97.06% 00:09:45.852 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- 
# sync 00:09:45.852 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=541165 00:09:45.852 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:45.852 12:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:45.852 [global] 00:09:45.852 thread=1 00:09:45.852 invalidate=1 00:09:45.852 rw=read 00:09:45.852 time_based=1 00:09:45.852 runtime=10 00:09:45.852 ioengine=libaio 00:09:45.852 direct=1 00:09:45.852 bs=4096 00:09:45.852 iodepth=1 00:09:45.852 norandommap=1 00:09:45.852 numjobs=1 00:09:45.852 00:09:45.852 [job0] 00:09:45.852 filename=/dev/nvme0n1 00:09:45.852 [job1] 00:09:45.852 filename=/dev/nvme0n2 00:09:45.852 [job2] 00:09:45.852 filename=/dev/nvme0n3 00:09:45.852 [job3] 00:09:45.852 filename=/dev/nvme0n4 00:09:45.852 Could not set queue depth (nvme0n1) 00:09:45.852 Could not set queue depth (nvme0n2) 00:09:45.852 Could not set queue depth (nvme0n3) 00:09:45.852 Could not set queue depth (nvme0n4) 00:09:45.852 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.852 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.852 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.852 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.852 fio-3.35 00:09:45.852 Starting 4 threads 00:09:49.131 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:49.131 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:49.131 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=311296, buflen=4096 00:09:49.131 fio: pid=541256, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.388 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.388 12:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:49.388 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7020544, buflen=4096 00:09:49.388 fio: pid=541255, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.648 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.648 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:49.648 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=565248, buflen=4096 00:09:49.648 fio: pid=541253, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.907 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.907 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:49.907 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=684032, buflen=4096 00:09:49.907 fio: pid=541254, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.907 00:09:49.907 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=541253: Wed Oct 30 12:21:22 2024 00:09:49.907 read: IOPS=38, BW=154KiB/s (158kB/s)(552KiB/3582msec) 00:09:49.907 slat (usec): min=12, max=11794, avg=147.68, stdev=1112.30 00:09:49.907 clat (usec): min=208, max=43046, avg=25636.34, stdev=19810.26 00:09:49.907 lat (usec): min=224, max=52861, avg=25784.85, stdev=19855.33 00:09:49.907 clat percentiles (usec): 00:09:49.907 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 235], 00:09:49.907 | 30.00th=[ 260], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:09:49.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.907 | 99.00th=[41681], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:49.907 | 99.99th=[43254] 00:09:49.907 bw ( KiB/s): min= 96, max= 480, per=7.55%, avg=164.00, stdev=154.94, samples=6 00:09:49.907 iops : min= 24, max= 120, avg=41.00, stdev=38.73, samples=6 00:09:49.907 lat (usec) : 250=27.34%, 500=10.07% 00:09:49.907 lat (msec) : 50=61.87% 00:09:49.907 cpu : usr=0.17%, sys=0.00%, ctx=143, majf=0, minf=2 00:09:49.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.907 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=541254: Wed Oct 30 12:21:22 2024 00:09:49.907 read: IOPS=43, BW=173KiB/s (177kB/s)(668KiB/3857msec) 00:09:49.907 slat (usec): min=7, max=2944, avg=39.99, stdev=225.65 00:09:49.907 clat (usec): min=233, max=41953, avg=22906.61, stdev=20239.52 00:09:49.907 lat (usec): min=246, max=44012, avg=22946.61, stdev=20252.73 00:09:49.907 clat percentiles (usec): 00:09:49.907 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:09:49.907 | 30.00th=[ 310], 40.00th=[ 351], 50.00th=[40633], 60.00th=[40633], 00:09:49.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.907 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:49.907 | 99.99th=[42206] 00:09:49.907 bw ( KiB/s): min= 112, max= 320, per=8.24%, avg=179.57, stdev=70.47, samples=7 00:09:49.907 iops : min= 28, max= 80, avg=44.86, stdev=17.66, samples=7 00:09:49.907 lat (usec) : 250=7.74%, 500=36.31% 00:09:49.907 lat (msec) : 50=55.36% 00:09:49.907 cpu : usr=0.05%, sys=0.16%, ctx=173, majf=0, minf=1 00:09:49.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.907 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=541255: Wed Oct 30 
12:21:22 2024 00:09:49.907 read: IOPS=519, BW=2077KiB/s (2127kB/s)(6856KiB/3301msec) 00:09:49.907 slat (nsec): min=4928, max=58376, avg=13627.41, stdev=7184.64 00:09:49.907 clat (usec): min=174, max=41223, avg=1894.96, stdev=8066.01 00:09:49.907 lat (usec): min=179, max=41255, avg=1908.59, stdev=8067.77 00:09:49.907 clat percentiles (usec): 00:09:49.907 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:09:49.907 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 215], 00:09:49.907 | 70.00th=[ 233], 80.00th=[ 249], 90.00th=[ 293], 95.00th=[ 412], 00:09:49.907 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.907 | 99.99th=[41157] 00:09:49.907 bw ( KiB/s): min= 96, max= 6496, per=100.00%, avg=2276.00, stdev=2441.30, samples=6 00:09:49.907 iops : min= 24, max= 1624, avg=569.00, stdev=610.33, samples=6 00:09:49.907 lat (usec) : 250=80.58%, 500=14.81%, 750=0.35% 00:09:49.907 lat (msec) : 2=0.06%, 50=4.14% 00:09:49.907 cpu : usr=0.36%, sys=0.88%, ctx=1715, majf=0, minf=2 00:09:49.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.907 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=541256: Wed Oct 30 12:21:22 2024 00:09:49.907 read: IOPS=25, BW=102KiB/s (104kB/s)(304KiB/2981msec) 00:09:49.907 slat (nsec): min=12466, max=34501, avg=23435.94, stdev=8626.64 00:09:49.907 clat (usec): min=351, max=41468, avg=38826.07, stdev=9120.71 00:09:49.907 lat (usec): min=367, max=41501, avg=38849.60, stdev=9121.04 00:09:49.907 clat percentiles (usec): 00:09:49.907 | 1.00th=[ 351], 5.00th=[ 469], 10.00th=[41157], 20.00th=[41157], 00:09:49.907 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.907 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:49.907 | 99.99th=[41681] 00:09:49.907 bw ( KiB/s): min= 96, max= 128, per=4.79%, avg=104.00, stdev=13.86, samples=5 00:09:49.907 iops : min= 24, max= 32, avg=26.00, stdev= 3.46, samples=5 00:09:49.907 lat (usec) : 500=5.19% 00:09:49.907 lat (msec) : 50=93.51% 00:09:49.907 cpu : usr=0.00%, sys=0.10%, ctx=78, majf=0, minf=2 00:09:49.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.907 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.907 00:09:49.907 Run status group 0 (all jobs): 00:09:49.907 READ: bw=2173KiB/s (2225kB/s), 102KiB/s-2077KiB/s (104kB/s-2127kB/s), io=8380KiB (8581kB), run=2981-3857msec 00:09:49.907 00:09:49.907 Disk stats (read/write): 00:09:49.907 nvme0n1: ios=163/0, merge=0/0, ticks=3399/0, in_queue=3399, util=95.79% 00:09:49.907 nvme0n2: ios=210/0, merge=0/0, ticks=4916/0, in_queue=4916, util=99.27% 00:09:49.907 nvme0n3: ios=1709/0, merge=0/0, ticks=3027/0, in_queue=3027, util=96.79% 00:09:49.907 nvme0n4: ios=73/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.75% 
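The pass that ends above is fio.sh's hotplug check: fio-wrapper starts a 10-second read job against the four namespaces, and while it is still running the script deletes the backing raid/concat and Malloc bdevs over RPC, so every job exits with the io_u "Operation not supported" errors recorded in the trace. A condensed sketch of that flow, using the paths and arguments from this run (the real fio.sh adds traps, retries, and status bookkeeping):

# Hotplug flow condensed from the trace above; a sketch, not the
# verbatim fio.sh, and it assumes the target and bdevs set up earlier.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!            # fio.sh stores this as $fio_pid (541165 here)
sleep 3               # let the jobs ramp up before pulling devices

# Delete the bdevs out from under the running jobs; each namespace then
# fails its reads with "Operation not supported", which is expected.
$spdk/scripts/rpc.py bdev_raid_delete concat0
$spdk/scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done

wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'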
00:09:50.166 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.166 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:50.425 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.425 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:50.684 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.684 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:50.942 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.942 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:51.200 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:51.200 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 541165 00:09:51.200 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:51.200 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.456 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.456 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:51.457 nvmf hotplug test: fio failed as expected 00:09:51.457 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:51.713 12:21:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.713 rmmod nvme_tcp 00:09:51.713 rmmod nvme_fabrics 00:09:51.713 rmmod nvme_keyring 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 538613 ']' 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 538613 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 538613 ']' 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 538613 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.713 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 538613 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 538613' 00:09:51.971 killing process with pid 538613 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 538613 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 538613 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.971 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-restore 00:09:52.229 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.229 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.229 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.229 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.229 12:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.135 00:09:54.135 real 0m24.633s 00:09:54.135 user 1m27.506s 00:09:54.135 sys 0m6.740s 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.135 ************************************ 00:09:54.135 END TEST nvmf_fio_target 00:09:54.135 ************************************ 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.135 ************************************ 00:09:54.135 START TEST nvmf_bdevio 00:09:54.135 ************************************ 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.135 * Looking for test storage... 
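Before the bdevio pass starts, the trace above tears the previous target down: the initiator disconnects from cnode1, waitforserial_disconnect polls lsblk until the SPDKISFASTANDAWESOME serial disappears, the subsystem is deleted over RPC, and nvmftestfini unloads nvme-tcp (pulling nvme_fabrics and nvme_keyring with it), kills nvmf_tgt (pid 538613), restores the iptables rules, removes the namespace, and flushes the test addresses. Roughly, in script form (a sketch; $nvmfpid is assumed to hold the target pid as in nvmf/common.sh, and the namespace removal condenses _remove_spdk_ns, whose body is not shown in the trace):

# Teardown sketch matching the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                       # wait for the namespace block device to vanish
done
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

modprobe -v -r nvme-tcp           # also drops nvme_fabrics / nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1          # drop the initiator-side test address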
00:09:54.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:54.135 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.394 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.395 --rc genhtml_branch_coverage=1 00:09:54.395 --rc genhtml_function_coverage=1 00:09:54.395 --rc genhtml_legend=1 00:09:54.395 --rc geninfo_all_blocks=1 00:09:54.395 --rc geninfo_unexecuted_blocks=1 00:09:54.395 00:09:54.395 ' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.395 --rc genhtml_branch_coverage=1 00:09:54.395 --rc genhtml_function_coverage=1 00:09:54.395 --rc genhtml_legend=1 00:09:54.395 --rc geninfo_all_blocks=1 00:09:54.395 --rc geninfo_unexecuted_blocks=1 00:09:54.395 00:09:54.395 ' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.395 --rc genhtml_branch_coverage=1 00:09:54.395 --rc genhtml_function_coverage=1 00:09:54.395 --rc genhtml_legend=1 00:09:54.395 --rc geninfo_all_blocks=1 00:09:54.395 --rc geninfo_unexecuted_blocks=1 00:09:54.395 00:09:54.395 ' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.395 --rc genhtml_branch_coverage=1 00:09:54.395 --rc genhtml_function_coverage=1 00:09:54.395 --rc genhtml_legend=1 00:09:54.395 --rc geninfo_all_blocks=1 00:09:54.395 --rc geninfo_unexecuted_blocks=1 00:09:54.395 00:09:54.395 ' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.395 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:56.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:56.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.300 12:21:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:56.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:56.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.300 
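With both e810 ports identified, nvmf_tcp_init (traced next) splits them across network namespaces so target and initiator talk over the physical ports rather than loopback: cvl_0_0 moves into a fresh namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction proves the path before the target starts. The commands below condense the steps shown in the following trace (interface names are this machine's; common.sh derives them from the PCI scan above):

# Namespace plumbing for the tcp/phy run, condensed from the trace below.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> initiator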
12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:56.300 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:56.301 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:56.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:56.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms
00:09:56.559 
00:09:56.559 --- 10.0.0.2 ping statistics ---
00:09:56.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:56.559 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:56.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:56.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:09:56.559 
00:09:56.559 --- 10.0.0.1 ping statistics ---
00:09:56.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:56.559 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:56.559 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:56.560 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:56.560 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:56.560 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:56.560 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:56.560 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=543896
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 543896
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 543896 ']'
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:56.560 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:56.560 [2024-10-30 12:21:29.074864] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:09:56.560 [2024-10-30 12:21:29.074963] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.560 [2024-10-30 12:21:29.148704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.560 [2024-10-30 12:21:29.209927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.560 [2024-10-30 12:21:29.209987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.560 [2024-10-30 12:21:29.210014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.560 [2024-10-30 12:21:29.210025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.560 [2024-10-30 12:21:29.210035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.560 [2024-10-30 12:21:29.211756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:56.560 [2024-10-30 12:21:29.211821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:56.560 [2024-10-30 12:21:29.211887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:56.560 [2024-10-30 12:21:29.211890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.818 [2024-10-30 12:21:29.357953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.818 Malloc0 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.818 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.819 12:21:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.819 [2024-10-30 12:21:29.425332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:56.819 { 00:09:56.819 "params": { 00:09:56.819 "name": "Nvme$subsystem", 00:09:56.819 "trtype": "$TEST_TRANSPORT", 00:09:56.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.819 "adrfam": "ipv4", 00:09:56.819 "trsvcid": "$NVMF_PORT", 00:09:56.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.819 "hdgst": ${hdgst:-false}, 00:09:56.819 "ddgst": ${ddgst:-false} 00:09:56.819 }, 00:09:56.819 "method": "bdev_nvme_attach_controller" 00:09:56.819 } 00:09:56.819 EOF 00:09:56.819 )") 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:56.819 12:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:56.819 "params": { 00:09:56.819 "name": "Nvme1", 00:09:56.819 "trtype": "tcp", 00:09:56.819 "traddr": "10.0.0.2", 00:09:56.819 "adrfam": "ipv4", 00:09:56.819 "trsvcid": "4420", 00:09:56.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.819 "hdgst": false, 00:09:56.819 "ddgst": false 00:09:56.819 }, 00:09:56.819 "method": "bdev_nvme_attach_controller" 00:09:56.819 }' 00:09:56.819 [2024-10-30 12:21:29.471728] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:09:56.819 [2024-10-30 12:21:29.471800] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544046 ] 00:09:57.078 [2024-10-30 12:21:29.541302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.078 [2024-10-30 12:21:29.605923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.078 [2024-10-30 12:21:29.605974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.078 [2024-10-30 12:21:29.605977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.334 I/O targets: 00:09:57.334 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:57.334 00:09:57.334 00:09:57.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.334 http://cunit.sourceforge.net/ 00:09:57.334 00:09:57.334 00:09:57.334 Suite: bdevio tests on: Nvme1n1 00:09:57.334 Test: blockdev write read block ...passed 00:09:57.334 Test: blockdev write zeroes read block ...passed 00:09:57.334 Test: blockdev write zeroes read no split ...passed 00:09:57.591 Test: blockdev write zeroes read split ...passed 00:09:57.591 Test: blockdev write zeroes read split partial ...passed 00:09:57.591 Test: blockdev reset ...[2024-10-30 12:21:30.029692] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:57.591 [2024-10-30 12:21:30.029815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1414640 (9): Bad file descriptor 00:09:57.591 [2024-10-30 12:21:30.177591] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:57.591 passed 00:09:57.591 Test: blockdev write read 8 blocks ...passed 00:09:57.591 Test: blockdev write read size > 128k ...passed 00:09:57.591 Test: blockdev write read invalid size ...passed 00:09:57.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.591 Test: blockdev write read max offset ...passed 00:09:57.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.849 Test: blockdev writev readv 8 blocks ...passed 00:09:57.849 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.849 Test: blockdev writev readv block ...passed 00:09:57.849 Test: blockdev writev readv size > 128k ...passed 00:09:57.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.849 Test: blockdev comparev and writev ...[2024-10-30 12:21:30.351352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.351389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.351413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.351431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.351848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.351873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.351895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.351911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.352326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.352350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.352371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.352393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.352791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.352814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.849 [2024-10-30 12:21:30.352851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:57.849 passed 00:09:57.849 Test: blockdev nvme passthru rw ...passed 00:09:57.849 Test: blockdev nvme passthru vendor specific ...[2024-10-30 12:21:30.436537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.849 [2024-10-30 12:21:30.436564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.436719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.849 [2024-10-30 12:21:30.436742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.436903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.849 [2024-10-30 12:21:30.436935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:57.849 [2024-10-30 12:21:30.437101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.849 [2024-10-30 12:21:30.437126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:57.849 passed 00:09:57.849 Test: blockdev nvme admin passthru ...passed 00:09:57.849 Test: blockdev copy ...passed 00:09:57.849 00:09:57.849 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.849 suites 1 1 n/a 0 0 00:09:57.849 tests 23 23 23 0 0 00:09:57.849 asserts 152 152 152 0 n/a 00:09:57.849 00:09:57.849 Elapsed time = 1.149 seconds 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.107 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.108 rmmod nvme_tcp 00:09:58.108 rmmod nvme_fabrics 00:09:58.108 rmmod nvme_keyring 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
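The bdevio run above drives the target entirely through rpc_cmd. For reference, a minimal standalone sketch of the same target-side setup follows, assuming a running nvmf_tgt and the scripts/rpc.py client from the SPDK tree; the transport options, bdev geometry, NQN, serial number, and listener address are copied verbatim from the trace, while the rpc variable is just local shorthand, not part of the harness:

    #!/usr/bin/env bash
    # Sketch: reproduce the bdevio target setup traced above by hand.
    # Assumes nvmf_tgt is already running and scripts/rpc.py can reach it.
    set -e
    rpc=./scripts/rpc.py    # shorthand; adjust to your SPDK checkout

    $rpc nvmf_create_transport -t tcp -o -u 8192    # options as passed by bdevio.sh@18
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches as an initiator using the bdev_nvme_attach_controller JSON that gen_nvmf_target_json printed above, and nvmftestfini tears everything down (nvmf_delete_subsystem, modprobe -r, iptables cleanup), as the surrounding trace shows.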
00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 543896 ']' 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 543896 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 543896 ']' 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 543896 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 543896 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 543896' 00:09:58.108 killing process with pid 543896 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 543896 00:09:58.108 12:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 543896 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.367 12:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.907 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.907 00:10:00.907 real 0m6.318s 00:10:00.907 user 0m10.335s 00:10:00.907 sys 0m2.069s 00:10:00.907 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.907 12:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.907 ************************************ 00:10:00.907 END TEST nvmf_bdevio 00:10:00.908 ************************************ 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:00.908 00:10:00.908 real 3m55.028s 00:10:00.908 user 10m16.735s 00:10:00.908 sys 1m6.143s 00:10:00.908 
12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.908 ************************************ 00:10:00.908 END TEST nvmf_target_core 00:10:00.908 ************************************ 00:10:00.908 12:21:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:00.908 12:21:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:00.908 12:21:33 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.908 12:21:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.908 ************************************ 00:10:00.908 START TEST nvmf_target_extra 00:10:00.908 ************************************ 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:00.908 * Looking for test storage... 00:10:00.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.908 --rc genhtml_branch_coverage=1 00:10:00.908 --rc genhtml_function_coverage=1 00:10:00.908 --rc genhtml_legend=1 00:10:00.908 --rc geninfo_all_blocks=1 00:10:00.908 --rc geninfo_unexecuted_blocks=1 00:10:00.908 00:10:00.908 ' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.908 --rc genhtml_branch_coverage=1 00:10:00.908 --rc genhtml_function_coverage=1 00:10:00.908 --rc genhtml_legend=1 00:10:00.908 --rc geninfo_all_blocks=1 00:10:00.908 --rc geninfo_unexecuted_blocks=1 00:10:00.908 00:10:00.908 ' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.908 --rc genhtml_branch_coverage=1 00:10:00.908 --rc genhtml_function_coverage=1 00:10:00.908 --rc genhtml_legend=1 00:10:00.908 --rc geninfo_all_blocks=1 00:10:00.908 --rc geninfo_unexecuted_blocks=1 00:10:00.908 00:10:00.908 ' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:00.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.908 --rc genhtml_branch_coverage=1 00:10:00.908 --rc genhtml_function_coverage=1 00:10:00.908 --rc genhtml_legend=1 00:10:00.908 --rc geninfo_all_blocks=1 00:10:00.908 --rc geninfo_unexecuted_blocks=1 00:10:00.908 00:10:00.908 ' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
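The cmp_versions/lt trace above (scripts/common.sh@333-368, used here to gate lcov 1.15 against 2) compares versions component-wise after splitting on '.', '-' and ':'. A simplified standalone rendering follows; it is not the exact SPDK helper and omits the decimal() digit guard visible in the trace:

    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older: less-than
        done
        return 1                                              # equal: not less-than
    }
    lt 1.15 2 && echo 'old lcov'   # same branch as the log: 1 < 2 at the first component

This is why each test script above ends up with lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'.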
00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:00.908 12:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:00.909 ************************************ 00:10:00.909 START TEST nvmf_example 00:10:00.909 ************************************ 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:00.909 * Looking for test storage... 
00:10:00.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.909 --rc genhtml_branch_coverage=1 00:10:00.909 --rc genhtml_function_coverage=1 00:10:00.909 --rc genhtml_legend=1 00:10:00.909 --rc geninfo_all_blocks=1 00:10:00.909 --rc geninfo_unexecuted_blocks=1 00:10:00.909 00:10:00.909 ' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.909 --rc genhtml_branch_coverage=1 00:10:00.909 --rc genhtml_function_coverage=1 00:10:00.909 --rc genhtml_legend=1 00:10:00.909 --rc geninfo_all_blocks=1 00:10:00.909 --rc geninfo_unexecuted_blocks=1 00:10:00.909 00:10:00.909 ' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.909 --rc genhtml_branch_coverage=1 00:10:00.909 --rc genhtml_function_coverage=1 00:10:00.909 --rc genhtml_legend=1 00:10:00.909 --rc geninfo_all_blocks=1 00:10:00.909 --rc geninfo_unexecuted_blocks=1 00:10:00.909 00:10:00.909 ' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:00.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.909 --rc genhtml_branch_coverage=1 00:10:00.909 --rc genhtml_function_coverage=1 00:10:00.909 --rc genhtml_legend=1 00:10:00.909 --rc geninfo_all_blocks=1 00:10:00.909 --rc geninfo_unexecuted_blocks=1 00:10:00.909 00:10:00.909 ' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:00.909 12:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:00.909 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:00.909 12:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.910 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.443 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.443 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.443 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.443 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.443 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.443 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:03.444 12:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:03.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:03.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:03.444 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:03.444 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.444 12:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:10:03.444 00:10:03.444 --- 10.0.0.2 ping statistics --- 00:10:03.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.444 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:03.444 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:10:03.444 00:10:03.444 --- 10.0.0.1 ping statistics --- 00:10:03.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.444 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=546187 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 546187 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 546187 ']' 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example 
00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:03.445 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.378 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:04.378 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0
00:10:04.378 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:10:04.378 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:04.378 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.378 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
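The rpc_cmd calls traced above stand up the complete target: a TCP transport, a RAM-backed bdev, a subsystem, a namespace, and a listener. A sketch of the same sequence issued directly through scripts/rpc.py (rpc_cmd is the harness wrapper around it; socket path and flag values exactly as traced above):

    rpc=scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # flags as used by the harness above
    $rpc bdev_malloc_create 64 512                                # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420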
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:04.638 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:16.837 Initializing NVMe Controllers
00:10:16.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:16.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:16.837 Initialization complete. Launching workers.
00:10:16.837 ========================================================
00:10:16.837 Latency(us)
00:10:16.837 Device Information : IOPS MiB/s Average min max
00:10:16.837 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14359.60 56.09 4458.25 894.54 15302.90
00:10:16.837 ========================================================
00:10:16.837 Total : 14359.60 56.09 4458.25 894.54 15302.90
00:10:16.837
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 546187 ']'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 546187
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 546187 ']'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 546187
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 546187
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf
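The perf table above is internally consistent: the MiB/s column is just IOPS times the 4 KiB I/O size set by -o 4096. A one-line check of the Total row:

    awk 'BEGIN { printf "%.2f MiB/s\n", 14359.60 * 4096 / (1024 * 1024) }'   # prints 56.09 MiB/s, matching the table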
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 546187'
killing process with pid 546187
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 546187
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 546187
nvmf threads initialize successfully
bdev subsystem init successfully
created a nvmf target service
create targets's poll groups done
all subsystems of target started
nvmf target is running
all subsystems of target stopped
destroy targets's poll groups done
destroyed the nvmf target service
bdev subsystem finish successfully
nvmf threads destroy successfully
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:16.837 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:17.406 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:17.407
00:10:17.407 real 0m16.533s
00:10:17.407 user 0m45.510s
00:10:17.407 sys 0m3.856s
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:17.407 ************************************
00:10:17.407 END TEST nvmf_example
00:10:17.407 ************************************
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
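The iptr trace above (iptables-save piped through grep -v into iptables-restore) is how the harness removes its own firewall rules without disturbing anything else on the node. As a sketch of that pattern:

    # Rewrite the ruleset with every SPDK_NVMF-tagged rule filtered out.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }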
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:17.407 ************************************
00:10:17.407 START TEST nvmf_filesystem
00:10:17.407 ************************************
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:17.407 * Looking for test storage...
00:10:17.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version
00:10:17.407 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.407 --rc genhtml_branch_coverage=1
00:10:17.407 --rc genhtml_function_coverage=1
00:10:17.407 --rc genhtml_legend=1
00:10:17.407 --rc geninfo_all_blocks=1
00:10:17.407 --rc geninfo_unexecuted_blocks=1
00:10:17.407
00:10:17.407 '
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.407 --rc genhtml_branch_coverage=1
00:10:17.407 --rc genhtml_function_coverage=1
00:10:17.407 --rc genhtml_legend=1
00:10:17.407 --rc geninfo_all_blocks=1
00:10:17.407 --rc geninfo_unexecuted_blocks=1
00:10:17.407
00:10:17.407 '
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.407 --rc genhtml_branch_coverage=1
00:10:17.407 --rc genhtml_function_coverage=1
00:10:17.407 --rc genhtml_legend=1
00:10:17.407 --rc geninfo_all_blocks=1
00:10:17.407 --rc geninfo_unexecuted_blocks=1
00:10:17.407
00:10:17.407 '
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.407 --rc genhtml_branch_coverage=1
00:10:17.407 --rc genhtml_function_coverage=1
00:10:17.407 --rc genhtml_legend=1
00:10:17.407 --rc geninfo_all_blocks=1
00:10:17.407 --rc geninfo_unexecuted_blocks=1
00:10:17.407
00:10:17.407 '
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
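The cmp_versions trace above splits '1.15' and '2' on dots and compares field by field, so lt 1.15 2 succeeds and the newer lcov option set is selected. A compact sketch of that comparison, simplified to the '<' case only:

    # Hedged sketch: pads missing fields with 0, returns 0 when $1 < $2.
    version_lt() {
        local IFS=.-
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2"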
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:10:17.407 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:10:17.408 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:10:17.408 #define SPDK_CONFIG_H
00:10:17.408 #define SPDK_CONFIG_AIO_FSDEV 1
00:10:17.408 #define SPDK_CONFIG_APPS 1
00:10:17.408 #define SPDK_CONFIG_ARCH native
00:10:17.408 #undef SPDK_CONFIG_ASAN
00:10:17.408 #undef SPDK_CONFIG_AVAHI
00:10:17.408 #undef SPDK_CONFIG_CET
00:10:17.408 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:10:17.408 #define SPDK_CONFIG_COVERAGE 1
00:10:17.408 #define SPDK_CONFIG_CROSS_PREFIX
00:10:17.408 #undef SPDK_CONFIG_CRYPTO
00:10:17.408 #undef SPDK_CONFIG_CRYPTO_MLX5
00:10:17.408 #undef SPDK_CONFIG_CUSTOMOCF
00:10:17.408 #undef SPDK_CONFIG_DAOS
00:10:17.408 #define SPDK_CONFIG_DAOS_DIR
00:10:17.408 #define SPDK_CONFIG_DEBUG 1
00:10:17.408 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:10:17.408 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:10:17.408 #define SPDK_CONFIG_DPDK_INC_DIR
00:10:17.408 #define SPDK_CONFIG_DPDK_LIB_DIR
00:10:17.408 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:10:17.408 #undef SPDK_CONFIG_DPDK_UADK
00:10:17.408 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:10:17.408 #define SPDK_CONFIG_EXAMPLES 1
00:10:17.408 #undef SPDK_CONFIG_FC
00:10:17.408 #define SPDK_CONFIG_FC_PATH
00:10:17.408 #define SPDK_CONFIG_FIO_PLUGIN 1
00:10:17.408 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:10:17.408 #define SPDK_CONFIG_FSDEV 1
00:10:17.408 #undef SPDK_CONFIG_FUSE
00:10:17.408 #undef SPDK_CONFIG_FUZZER
00:10:17.408 #define SPDK_CONFIG_FUZZER_LIB
00:10:17.408 #undef SPDK_CONFIG_GOLANG
00:10:17.408 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:10:17.408 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:10:17.408 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:10:17.408 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:10:17.408 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:10:17.408 #undef SPDK_CONFIG_HAVE_LIBBSD
00:10:17.408 #undef SPDK_CONFIG_HAVE_LZ4
00:10:17.408 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:10:17.408 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:10:17.408 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:10:17.408 #define SPDK_CONFIG_IDXD 1
00:10:17.408 #define SPDK_CONFIG_IDXD_KERNEL 1
00:10:17.408 #undef SPDK_CONFIG_IPSEC_MB
00:10:17.408 #define SPDK_CONFIG_IPSEC_MB_DIR
00:10:17.408 #define SPDK_CONFIG_ISAL 1
00:10:17.408 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:10:17.408 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:10:17.408 #define SPDK_CONFIG_LIBDIR
00:10:17.408 #undef SPDK_CONFIG_LTO
00:10:17.408 #define SPDK_CONFIG_MAX_LCORES 128
00:10:17.408 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:10:17.408 #define SPDK_CONFIG_NVME_CUSE 1
00:10:17.408 #undef SPDK_CONFIG_OCF
00:10:17.408 #define SPDK_CONFIG_OCF_PATH
00:10:17.408 #define SPDK_CONFIG_OPENSSL_PATH
00:10:17.408 #undef SPDK_CONFIG_PGO_CAPTURE
00:10:17.408 #define SPDK_CONFIG_PGO_DIR
00:10:17.408 #undef SPDK_CONFIG_PGO_USE
00:10:17.408 #define SPDK_CONFIG_PREFIX /usr/local
00:10:17.408 #undef SPDK_CONFIG_RAID5F
00:10:17.408 #undef SPDK_CONFIG_RBD
00:10:17.408 #define SPDK_CONFIG_RDMA 1
00:10:17.408 #define SPDK_CONFIG_RDMA_PROV verbs
00:10:17.408 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:10:17.408 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:10:17.408 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:10:17.408 #define SPDK_CONFIG_SHARED 1
00:10:17.408 #undef SPDK_CONFIG_SMA
00:10:17.408 #define SPDK_CONFIG_TESTS 1
00:10:17.408 #undef SPDK_CONFIG_TSAN
00:10:17.408 #define SPDK_CONFIG_UBLK 1
00:10:17.409 #define SPDK_CONFIG_UBSAN 1
00:10:17.409 #undef SPDK_CONFIG_UNIT_TESTS
00:10:17.409 #undef SPDK_CONFIG_URING
00:10:17.409 #define SPDK_CONFIG_URING_PATH
00:10:17.409 #undef SPDK_CONFIG_URING_ZNS
00:10:17.409 #undef SPDK_CONFIG_USDT
00:10:17.409 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:10:17.409 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:10:17.409 #define SPDK_CONFIG_VFIO_USER 1
00:10:17.409 #define SPDK_CONFIG_VFIO_USER_DIR
00:10:17.409 #define SPDK_CONFIG_VHOST 1
00:10:17.409 #define SPDK_CONFIG_VIRTIO 1
00:10:17.409 #undef SPDK_CONFIG_VTUNE
00:10:17.409 #define SPDK_CONFIG_VTUNE_DIR
00:10:17.409 #define SPDK_CONFIG_WERROR 1
00:10:17.409 #define SPDK_CONFIG_WPDK_DIR
00:10:17.409 #undef SPDK_CONFIG_XNVME
00:10:17.409 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
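The [[ ... == *...* ]] test closing the dump above is how applications.sh decides whether this is a debug build: it slurps the generated config.h and glob-matches for the DEBUG define. As a sketch, with the path taken from the trace:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    # $(< file) reads the whole file; the pattern match succeeds on a debug build.
    if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi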
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
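The pm/common bookkeeping above uses a bash associative array to record which resource monitors must run under sudo; collect-cpu-temp and collect-bmc-pm are then appended to the active set because the node is bare metal (not QEMU, not a container). A sketch of that structure:

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1      # BMC power readings need root
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    for mon in "${!MONITOR_RESOURCES_SUDO[@]}"; do
        (( MONITOR_RESOURCES_SUDO[$mon] )) && echo "$mon runs under sudo -E"
    done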
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:10:17.409 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
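The ASAN_OPTIONS and UBSAN_OPTIONS strings exported in the trace above configure the sanitizer runtimes for every binary the harness subsequently launches. As a stand-alone sketch (option values copied verbatim from the trace; the comments are editorial interpretation, not part of the script):

    # Sanitizer runtime knobs as exported by autotest_common.sh above.
    # abort_on_error=1 turns a sanitizer report into SIGABRT so the harness sees a
    # hard failure; UBSAN's exitcode=134 matches the 128+SIGABRT shell convention.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134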
00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:17.410 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
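Within the block above, lines @202 through @242 of autotest_common.sh build a LeakSanitizer suppression file: known third-party leaks are listed in /var/tmp/asan_suppression_file and LSAN_OPTIONS points the runtime at it, so those leaks are ignored rather than failing the run. A condensed sketch of the same steps (the real script's cat plumbing is elided; the path and suppression entry are taken from the trace):

    # Rebuild the leak-suppression file from scratch for this run.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # Leaks inside libfuse3.so are outside SPDK's control; tell LSAN to skip them.
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file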
00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 548011 ]] 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 548011 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:17.411 
12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.aBV9AA 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aBV9AA/tests/target /tmp/spdk.aBV9AA 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:17.411 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:17.411 12:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=56127700992 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5860827136 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984232960 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375277568 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22429696 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993928192 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=335872 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:17.670 12:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248
00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536
00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288
00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:10:17.670 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n'
00:10:17.671 * Looking for test storage...
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}"
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}'
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=56127700992
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size ))
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size ))
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]]
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]]
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]]
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8075419648
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 ))
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:17.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0
00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace
00:10:17.671 12:21:50
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:17.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.671 --rc genhtml_branch_coverage=1 00:10:17.671 --rc genhtml_function_coverage=1 00:10:17.671 --rc genhtml_legend=1 00:10:17.671 --rc geninfo_all_blocks=1 00:10:17.671 --rc geninfo_unexecuted_blocks=1 00:10:17.671 00:10:17.671 ' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.671 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.672 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:20.207 
12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:20.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:20.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:20.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:20.207 Found net devices under 
0000:0a:00.1: cvl_0_1
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:20.207 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:20.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:20.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms
00:10:20.207
00:10:20.207 --- 10.0.0.2 ping statistics ---
00:10:20.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:20.207 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:20.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:20.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms
00:10:20.208
00:10:20.208 --- 10.0.0.1 ping statistics ---
00:10:20.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:20.208 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:20.208 ************************************
00:10:20.208 START TEST nvmf_filesystem_no_in_capsule
00:10:20.208 ************************************
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
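Both pings succeeding is what lets nvmf_tcp_init return 0: the two ports of the e810 NIC now sit on opposite ends of a point-to-point 10.0.0.0/24 link, with the target port isolated in its own network namespace. Condensed from the trace above (interface names and addresses exactly as logged; ipts is the harness wrapper that adds the tagged iptables rule):

    # Move the target-side port into a dedicated namespace; the initiator port stays outside.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) traffic on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the log does.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1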
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=549657
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 549657
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 549657 ']'
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:20.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.208 [2024-10-30 12:21:52.559647] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:10:20.208 [2024-10-30 12:21:52.559717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:20.208 [2024-10-30 12:21:52.630871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:20.208 [2024-10-30 12:21:52.688764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:20.208 [2024-10-30 12:21:52.688819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:20.208 [2024-10-30 12:21:52.688848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:20.208 [2024-10-30 12:21:52.688859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:20.208 [2024-10-30 12:21:52.688875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
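nvmf_tgt (PID 549657) is launched inside the target namespace with reactors on all four cores (-m 0xF) and the full tracepoint mask (-e 0xFFFF) that the startup notices describe; waitforlisten then holds configuration back until the JSON-RPC socket /var/tmp/spdk.sock answers, retrying up to max_retries=100 times. A rough stand-alone equivalent (the polling loop is an illustrative stand-in for the waitforlisten helper; relative paths assume the SPDK repo root):

    # Launch the NVMe-oF target in the namespace created earlier.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the default RPC socket accepts requests before configuring anything.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done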
00:10:20.208 [2024-10-30 12:21:52.690383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:20.208 [2024-10-30 12:21:52.690446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:20.208 [2024-10-30 12:21:52.690511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:20.208 [2024-10-30 12:21:52.690515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.208 [2024-10-30 12:21:52.846703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.208 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.466 Malloc1
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.466 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.467 [2024-10-30 12:21:53.039195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[
00:10:20.467 {
00:10:20.467 "name": "Malloc1",
00:10:20.467 "aliases": [
00:10:20.467 "5f542a61-6161-42c0-a9c8-926fec1bd8f6"
00:10:20.467 ],
00:10:20.467 "product_name": "Malloc disk",
00:10:20.467 "block_size": 512,
00:10:20.467 "num_blocks": 1048576,
00:10:20.467 "uuid": "5f542a61-6161-42c0-a9c8-926fec1bd8f6",
00:10:20.467 "assigned_rate_limits": {
00:10:20.467 "rw_ios_per_sec": 0,
00:10:20.467 "rw_mbytes_per_sec": 0,
00:10:20.467 "r_mbytes_per_sec": 0,
00:10:20.467 "w_mbytes_per_sec": 0
00:10:20.467 },
00:10:20.467 "claimed": true,
00:10:20.467 "claim_type": "exclusive_write",
00:10:20.467 "zoned": false,
00:10:20.467 "supported_io_types": {
00:10:20.467 "read":
true, 00:10:20.467 "write": true, 00:10:20.467 "unmap": true, 00:10:20.467 "flush": true, 00:10:20.467 "reset": true, 00:10:20.467 "nvme_admin": false, 00:10:20.467 "nvme_io": false, 00:10:20.467 "nvme_io_md": false, 00:10:20.467 "write_zeroes": true, 00:10:20.467 "zcopy": true, 00:10:20.467 "get_zone_info": false, 00:10:20.467 "zone_management": false, 00:10:20.467 "zone_append": false, 00:10:20.467 "compare": false, 00:10:20.467 "compare_and_write": false, 00:10:20.467 "abort": true, 00:10:20.467 "seek_hole": false, 00:10:20.467 "seek_data": false, 00:10:20.467 "copy": true, 00:10:20.467 "nvme_iov_md": false 00:10:20.467 }, 00:10:20.467 "memory_domains": [ 00:10:20.467 { 00:10:20.467 "dma_device_id": "system", 00:10:20.467 "dma_device_type": 1 00:10:20.467 }, 00:10:20.467 { 00:10:20.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.467 "dma_device_type": 2 00:10:20.467 } 00:10:20.467 ], 00:10:20.467 "driver_specific": {} 00:10:20.467 } 00:10:20.467 ]' 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:20.467 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.401 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:21.401 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:21.401 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.401 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:21.401 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:23.300 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:24.673 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:25.238 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.496 ************************************ 00:10:25.496 START TEST filesystem_ext4 00:10:25.496 ************************************ 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
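Everything from the nvmf_create_transport RPC above down to partprobe is the shared fixture in target/filesystem.sh: export a 512 MiB malloc bdev over NVMe/TCP, attach to it from the initiator side, verify the kernel block device matches the bdev size, and lay down one GPT partition for the filesystem tests. A condensed sketch of those steps (hostnqn/hostid omitted; the size arithmetic is reconstructed from sec_size_to_bytes, which multiplies the 512-byte sector count from /sys/block by 512):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1      # 1048576 blocks x 512 B = 536870912 B
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))
    (( nvme_size == 536870912 ))                      # must equal the malloc bdev size
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe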
00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:25.496 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:25.496 mke2fs 1.47.0 (5-Feb-2023) 00:10:25.496 Discarding device blocks: 0/522240 done 00:10:25.496 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:25.497 Filesystem UUID: 5d863acb-f870-47ff-a6b1-59146d237b19 00:10:25.497 Superblock backups stored on blocks: 00:10:25.497 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:25.497 00:10:25.497 Allocating group tables: 0/64 done 00:10:25.497 Writing inode tables: 0/64 done 00:10:25.754 Creating journal (8192 blocks): done 00:10:27.945 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:10:27.945 00:10:27.945 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:27.945 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.503 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.504 
12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 549657 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.504 00:10:34.504 real 0m8.210s 00:10:34.504 user 0m0.007s 00:10:34.504 sys 0m0.078s 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:34.504 ************************************ 00:10:34.504 END TEST filesystem_ext4 00:10:34.504 ************************************ 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.504 ************************************ 00:10:34.504 START TEST filesystem_btrfs 00:10:34.504 ************************************ 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:34.504 12:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:34.504 btrfs-progs v6.8.1 00:10:34.504 See https://btrfs.readthedocs.io for more information. 00:10:34.504 00:10:34.504 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:34.504 NOTE: several default settings have changed in version 5.15, please make sure 00:10:34.504 this does not affect your deployments: 00:10:34.504 - DUP for metadata (-m dup) 00:10:34.504 - enabled no-holes (-O no-holes) 00:10:34.504 - enabled free-space-tree (-R free-space-tree) 00:10:34.504 00:10:34.504 Label: (null) 00:10:34.504 UUID: 466ba037-fda5-4b8c-bf37-e27aa97582ef 00:10:34.504 Node size: 16384 00:10:34.504 Sector size: 4096 (CPU page size: 4096) 00:10:34.504 Filesystem size: 510.00MiB 00:10:34.504 Block group profiles: 00:10:34.504 Data: single 8.00MiB 00:10:34.504 Metadata: DUP 32.00MiB 00:10:34.504 System: DUP 8.00MiB 00:10:34.504 SSD detected: yes 00:10:34.504 Zoned device: no 00:10:34.504 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:34.504 Checksum: crc32c 00:10:34.504 Number of devices: 1 00:10:34.504 Devices: 00:10:34.504 ID SIZE PATH 00:10:34.504 1 510.00MiB /dev/nvme0n1p1 00:10:34.504 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:34.504 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 549657 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.762 
12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.762 00:10:34.762 real 0m1.231s 00:10:34.762 user 0m0.025s 00:10:34.762 sys 0m0.110s 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.762 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.762 ************************************ 00:10:34.762 END TEST filesystem_btrfs 00:10:34.762 ************************************ 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.022 ************************************ 00:10:35.022 START TEST filesystem_xfs 00:10:35.022 ************************************ 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:35.022 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:35.022 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:35.022 = sectsz=512 attr=2, projid32bit=1 00:10:35.022 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:35.022 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:35.022 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:35.022 = sunit=0 swidth=0 blks 00:10:35.022 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:35.022 log =internal log bsize=4096 blocks=16384, version=2 00:10:35.022 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:35.022 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:36.045 Discarding blocks...Done. 00:10:36.045 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:36.045 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 549657 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.937 00:10:37.937 real 0m2.919s 00:10:37.937 user 0m0.018s 00:10:37.937 sys 0m0.060s 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.937 ************************************ 00:10:37.937 END TEST filesystem_xfs 00:10:37.937 ************************************ 00:10:37.937 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.194 12:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 549657 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 549657 ']' 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 549657 00:10:38.194 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 549657 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 549657' 00:10:38.451 killing process with pid 549657 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 549657 00:10:38.451 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 549657 00:10:38.708 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:38.708 00:10:38.708 real 0m18.855s 00:10:38.708 user 1m13.195s 00:10:38.708 sys 0m2.211s 00:10:38.708 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.708 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.708 ************************************ 00:10:38.708 END TEST nvmf_filesystem_no_in_capsule 00:10:38.708 ************************************ 00:10:38.708 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.967 ************************************ 00:10:38.967 START TEST nvmf_filesystem_in_capsule 00:10:38.967 ************************************ 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=552042 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 552042 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 552042 ']' 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
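The no_in_capsule suite has finished (real 0m18.855s above), and nvmf_filesystem_in_capsule now brings up a fresh target as pid 552042 to repeat the identical ext4/btrfs/xfs sequence. The only parameter that changes is the transport's in-capsule data size: with -c 0 every host write is transferred separately (R2T/H2C exchanges in NVMe/TCP terms), while -c 4096 lets up to 4 KiB of write data ride inside the command capsule itself. The two RPCs side by side (-c is --in-capsule-data-size and -u is --io-unit-size in scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # first pass, above
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this pass, below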
00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:38.967 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.967 [2024-10-30 12:22:11.476926] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:10:38.967 [2024-10-30 12:22:11.477017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.967 [2024-10-30 12:22:11.569836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.967 [2024-10-30 12:22:11.645055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.967 [2024-10-30 12:22:11.645131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.967 [2024-10-30 12:22:11.645171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.967 [2024-10-30 12:22:11.645193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.967 [2024-10-30 12:22:11.645212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.967 [2024-10-30 12:22:11.647323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.967 [2024-10-30 12:22:11.647386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.967 [2024-10-30 12:22:11.647516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.967 [2024-10-30 12:22:11.647527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.226 [2024-10-30 12:22:11.872206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.226 12:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.226 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.485 Malloc1 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.485 [2024-10-30 12:22:12.047936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:39.485 12:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:39.485 { 00:10:39.485 "name": "Malloc1", 00:10:39.485 "aliases": [ 00:10:39.485 "97f236cb-e919-4d80-b0d6-12f97f174fc9" 00:10:39.485 ], 00:10:39.485 "product_name": "Malloc disk", 00:10:39.485 "block_size": 512, 00:10:39.485 "num_blocks": 1048576, 00:10:39.485 "uuid": "97f236cb-e919-4d80-b0d6-12f97f174fc9", 00:10:39.485 "assigned_rate_limits": { 00:10:39.485 "rw_ios_per_sec": 0, 00:10:39.485 "rw_mbytes_per_sec": 0, 00:10:39.485 "r_mbytes_per_sec": 0, 00:10:39.485 "w_mbytes_per_sec": 0 00:10:39.485 }, 00:10:39.485 "claimed": true, 00:10:39.485 "claim_type": "exclusive_write", 00:10:39.485 "zoned": false, 00:10:39.485 "supported_io_types": { 00:10:39.485 "read": true, 00:10:39.485 "write": true, 00:10:39.485 "unmap": true, 00:10:39.485 "flush": true, 00:10:39.485 "reset": true, 00:10:39.485 "nvme_admin": false, 00:10:39.485 "nvme_io": false, 00:10:39.485 "nvme_io_md": false, 00:10:39.485 "write_zeroes": true, 00:10:39.485 "zcopy": true, 00:10:39.485 "get_zone_info": false, 00:10:39.485 "zone_management": false, 00:10:39.485 "zone_append": false, 00:10:39.485 "compare": false, 00:10:39.485 "compare_and_write": false, 00:10:39.485 "abort": true, 00:10:39.485 "seek_hole": false, 00:10:39.485 "seek_data": false, 00:10:39.485 "copy": true, 00:10:39.485 "nvme_iov_md": false 00:10:39.485 }, 00:10:39.485 "memory_domains": [ 00:10:39.485 { 00:10:39.485 "dma_device_id": "system", 00:10:39.485 "dma_device_type": 1 00:10:39.485 }, 00:10:39.485 { 00:10:39.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.485 "dma_device_type": 2 00:10:39.485 } 00:10:39.485 ], 00:10:39.485 "driver_specific": {} 00:10:39.485 } 00:10:39.485 ]' 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:39.485 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.421 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.421 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:40.421 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.421 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:40.421 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:42.318 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:42.576 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:43.508 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.440 ************************************ 00:10:44.440 START TEST filesystem_in_capsule_ext4 00:10:44.440 ************************************ 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:44.440 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:44.440 mke2fs 1.47.0 (5-Feb-2023) 00:10:44.440 Discarding device blocks: 0/522240 done 00:10:44.440 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:44.440 Filesystem UUID: 382c0e2a-1033-4228-86c5-547b319984e3 00:10:44.440 Superblock backups stored on blocks: 00:10:44.440 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:44.440 00:10:44.440 Allocating group tables: 0/64 done 00:10:44.440 Writing inode tables: 
0/64 done 00:10:45.373 Creating journal (8192 blocks): done 00:10:45.373 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.373 00:10:45.373 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:45.373 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.632 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.632 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 552042 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.890 00:10:50.890 real 0m6.513s 00:10:50.890 user 0m0.022s 00:10:50.890 sys 0m0.061s 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:50.890 ************************************ 00:10:50.890 END TEST filesystem_in_capsule_ext4 00:10:50.890 ************************************ 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.890 
************************************ 00:10:50.890 START TEST filesystem_in_capsule_btrfs 00:10:50.890 ************************************ 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:50.890 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:51.148 btrfs-progs v6.8.1 00:10:51.148 See https://btrfs.readthedocs.io for more information. 00:10:51.148 00:10:51.148 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:51.148 NOTE: several default settings have changed in version 5.15, please make sure 00:10:51.148 this does not affect your deployments: 00:10:51.148 - DUP for metadata (-m dup) 00:10:51.148 - enabled no-holes (-O no-holes) 00:10:51.148 - enabled free-space-tree (-R free-space-tree) 00:10:51.148 00:10:51.148 Label: (null) 00:10:51.148 UUID: 444a4a06-8ae0-4e3e-83d6-1a15b30165b2 00:10:51.148 Node size: 16384 00:10:51.148 Sector size: 4096 (CPU page size: 4096) 00:10:51.148 Filesystem size: 510.00MiB 00:10:51.148 Block group profiles: 00:10:51.148 Data: single 8.00MiB 00:10:51.148 Metadata: DUP 32.00MiB 00:10:51.148 System: DUP 8.00MiB 00:10:51.148 SSD detected: yes 00:10:51.148 Zoned device: no 00:10:51.148 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:51.148 Checksum: crc32c 00:10:51.148 Number of devices: 1 00:10:51.148 Devices: 00:10:51.148 ID SIZE PATH 00:10:51.148 1 510.00MiB /dev/nvme0n1p1 00:10:51.148 00:10:51.148 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:51.148 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 552042 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:51.406 00:10:51.406 real 0m0.486s 00:10:51.406 user 0m0.007s 00:10:51.406 sys 0m0.101s 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:51.406 ************************************ 00:10:51.406 END TEST filesystem_in_capsule_btrfs 00:10:51.406 ************************************ 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.406 ************************************ 00:10:51.406 START TEST filesystem_in_capsule_xfs 00:10:51.406 ************************************ 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:51.406 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:51.406 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:51.406 = sectsz=512 attr=2, projid32bit=1 00:10:51.406 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:51.406 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:51.406 data = bsize=4096 blocks=130560, imaxpct=25 00:10:51.406 = sunit=0 swidth=0 blks 00:10:51.406 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:51.406 log =internal log bsize=4096 blocks=16384, version=2 00:10:51.406 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:51.406 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:52.780 Discarding blocks...Done. 
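The make_filesystem helper whose xtrace appears above (autotest_common.sh lines ~928-947) can be roughly reconstructed as below. Two caveats: the ext4 branch is inferred (this excerpt only proves that non-ext4 types get -f), and the retry counter i is declared in the trace but its loop is never exercised here because mkfs succeeds on the first attempt.

    # Rough reconstruction from the xtrace; not the verbatim source.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0          # retry counter seen in the trace; retry path not shown here
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F       # inferred: mkfs.ext4 spells "force" as -F (not in this excerpt)
        else
            force=-f       # proven by the trace for btrfs and xfs
        fi
        mkfs."$fstype" $force "$dev_name" && return 0
    }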
00:10:52.780 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:52.780 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 552042 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:55.309 00:10:55.309 real 0m3.664s 00:10:55.309 user 0m0.024s 00:10:55.309 sys 0m0.048s 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 ************************************ 00:10:55.309 END TEST filesystem_in_capsule_xfs 00:10:55.309 ************************************ 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:55.309 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 552042 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 552042 ']' 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 552042 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 552042 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 552042' 00:10:55.310 killing process with pid 552042 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 552042 00:10:55.310 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 552042 00:10:55.569 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:55.569 00:10:55.569 real 0m16.828s 00:10:55.569 user 1m5.231s 00:10:55.569 sys 0m2.046s 00:10:55.569 12:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.569 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.569 ************************************ 00:10:55.569 END TEST nvmf_filesystem_in_capsule 00:10:55.569 ************************************ 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.829 rmmod nvme_tcp 00:10:55.829 rmmod nvme_fabrics 00:10:55.829 rmmod nvme_keyring 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.829 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.736 00:10:57.736 real 0m40.512s 00:10:57.736 user 2m19.493s 00:10:57.736 sys 0m6.033s 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.736 
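The nvmftestfini teardown traced above reduces to the sketch below. The module unloads and the iptables pipeline are copied from the trace (the rmmod lines confirm nvme_tcp, nvme_fabrics, and nvme_keyring all drop out); the explicit netns delete is an assumption about what _remove_spdk_ns does, since its output is redirected away in the log.

    # Condensed sketch of the TCP-transport teardown traced above.
    nvmf_tcp_teardown() {
        sync
        modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring
        modprobe -v -r nvme-fabrics
        # Drop only the SPDK-tagged firewall rules, keep everything else:
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
        ip -4 addr flush cvl_0_1                      # clear the initiator-side interface
    }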
************************************ 00:10:57.736 END TEST nvmf_filesystem 00:10:57.736 ************************************ 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.736 12:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.996 ************************************ 00:10:57.996 START TEST nvmf_target_discovery 00:10:57.996 ************************************ 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:57.996 * Looking for test storage... 00:10:57.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.996 --rc genhtml_branch_coverage=1 00:10:57.996 --rc genhtml_function_coverage=1 00:10:57.996 --rc genhtml_legend=1 00:10:57.996 --rc geninfo_all_blocks=1 00:10:57.996 --rc geninfo_unexecuted_blocks=1 00:10:57.996 00:10:57.996 ' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.996 --rc genhtml_branch_coverage=1 00:10:57.996 --rc genhtml_function_coverage=1 00:10:57.996 --rc genhtml_legend=1 00:10:57.996 --rc geninfo_all_blocks=1 00:10:57.996 --rc geninfo_unexecuted_blocks=1 00:10:57.996 00:10:57.996 ' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.996 --rc genhtml_branch_coverage=1 00:10:57.996 --rc genhtml_function_coverage=1 00:10:57.996 --rc genhtml_legend=1 00:10:57.996 --rc geninfo_all_blocks=1 00:10:57.996 --rc geninfo_unexecuted_blocks=1 00:10:57.996 00:10:57.996 ' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:57.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.996 --rc genhtml_branch_coverage=1 00:10:57.996 --rc genhtml_function_coverage=1 00:10:57.996 --rc genhtml_legend=1 00:10:57.996 --rc geninfo_all_blocks=1 00:10:57.996 --rc geninfo_unexecuted_blocks=1 00:10:57.996 00:10:57.996 ' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:57.996 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.997 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.533 12:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.533 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.533 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.533 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:00.534 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.534 12:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:11:00.534 00:11:00.534 --- 10.0.0.2 ping statistics --- 00:11:00.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.534 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:11:00.534 00:11:00.534 --- 10.0.0.1 ping statistics --- 00:11:00.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.534 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.534 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=556187 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 556187 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 556187 ']' 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.534 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.534 [2024-10-30 12:22:33.049111] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:11:00.534 [2024-10-30 12:22:33.049182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.534 [2024-10-30 12:22:33.119121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.534 [2024-10-30 12:22:33.177147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.534 [2024-10-30 12:22:33.177198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.534 [2024-10-30 12:22:33.177225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.534 [2024-10-30 12:22:33.177237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.534 [2024-10-30 12:22:33.177288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
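The nvmf_tcp_init trace above (common.sh@250-291) wires the two E810 ports of one host back-to-back through a network namespace, so the same machine can act as both target and initiator. A condensed restatement, with interface names and addresses exactly as logged (the relative nvmf_tgt path stands in for the full workspace path):

    # Target side lives in its own namespace on the first port (cvl_0_0);
    # the initiator stays in the root namespace on the second port (cvl_0_1).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagging the rule so teardown can grep it back out:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability
    # The target app is then launched inside the namespace, as @508 shows:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &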
00:11:00.534 [2024-10-30 12:22:33.178924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.534 [2024-10-30 12:22:33.179096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.534 [2024-10-30 12:22:33.179169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.534 [2024-10-30 12:22:33.179172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.793 [2024-10-30 12:22:33.328851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.793 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.793 Null1 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 [2024-10-30 12:22:33.369184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 Null2 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:00.794 Null3 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 Null4 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.794 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:01.053 00:11:01.053 Discovery Log Number of Records 6, Generation counter 6 00:11:01.053 =====Discovery Log Entry 0====== 00:11:01.053 trtype: tcp 00:11:01.053 adrfam: ipv4 00:11:01.053 subtype: current discovery subsystem 00:11:01.053 treq: not required 00:11:01.053 portid: 0 00:11:01.053 trsvcid: 4420 00:11:01.053 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:01.053 traddr: 10.0.0.2 00:11:01.053 eflags: explicit discovery connections, duplicate discovery information 00:11:01.053 sectype: none 00:11:01.053 =====Discovery Log Entry 1====== 00:11:01.053 trtype: tcp 00:11:01.053 adrfam: ipv4 00:11:01.053 subtype: nvme subsystem 00:11:01.053 treq: not required 00:11:01.053 portid: 0 00:11:01.053 trsvcid: 4420 00:11:01.053 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:01.053 traddr: 10.0.0.2 00:11:01.053 eflags: none 00:11:01.053 sectype: none 00:11:01.053 =====Discovery Log Entry 2====== 00:11:01.053 trtype: tcp 00:11:01.053 adrfam: ipv4 00:11:01.053 subtype: nvme subsystem 00:11:01.053 treq: not required 00:11:01.053 portid: 0 00:11:01.053 trsvcid: 4420 00:11:01.053 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:01.053 traddr: 10.0.0.2 00:11:01.053 eflags: none 00:11:01.053 sectype: none 00:11:01.053 =====Discovery Log Entry 3====== 00:11:01.053 trtype: tcp 00:11:01.053 adrfam: ipv4 00:11:01.053 subtype: nvme subsystem 00:11:01.053 treq: not required 00:11:01.053 portid: 0 00:11:01.053 trsvcid: 4420 00:11:01.053 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:01.053 traddr: 10.0.0.2 00:11:01.053 eflags: none 00:11:01.053 sectype: none 00:11:01.053 =====Discovery Log Entry 4====== 00:11:01.053 trtype: tcp 00:11:01.053 adrfam: ipv4 00:11:01.053 subtype: nvme subsystem 
00:11:01.053 treq: not required 00:11:01.053 portid: 0 00:11:01.053 trsvcid: 4420 00:11:01.053 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:01.053 traddr: 10.0.0.2 00:11:01.053 eflags: none 00:11:01.053 sectype: none 00:11:01.053 =====Discovery Log Entry 5====== 00:11:01.053 trtype: tcp 00:11:01.053 adrfam: ipv4 00:11:01.053 subtype: discovery subsystem referral 00:11:01.053 treq: not required 00:11:01.053 portid: 0 00:11:01.053 trsvcid: 4430 00:11:01.053 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:01.053 traddr: 10.0.0.2 00:11:01.053 eflags: none 00:11:01.053 sectype: none 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:01.053 Perform nvmf subsystem discovery via RPC 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.053 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.053 [ 00:11:01.053 { 00:11:01.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:01.053 "subtype": "Discovery", 00:11:01.053 "listen_addresses": [ 00:11:01.053 { 00:11:01.053 "trtype": "TCP", 00:11:01.053 "adrfam": "IPv4", 00:11:01.054 "traddr": "10.0.0.2", 00:11:01.054 "trsvcid": "4420" 00:11:01.054 } 00:11:01.054 ], 00:11:01.054 "allow_any_host": true, 00:11:01.054 "hosts": [] 00:11:01.054 }, 00:11:01.054 { 00:11:01.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.054 "subtype": "NVMe", 00:11:01.054 "listen_addresses": [ 00:11:01.054 { 00:11:01.054 "trtype": "TCP", 00:11:01.054 "adrfam": "IPv4", 00:11:01.054 "traddr": "10.0.0.2", 00:11:01.054 "trsvcid": "4420" 00:11:01.054 } 00:11:01.054 ], 00:11:01.054 "allow_any_host": true, 00:11:01.054 "hosts": [], 00:11:01.054 "serial_number": "SPDK00000000000001", 00:11:01.054 "model_number": "SPDK bdev Controller", 00:11:01.054 "max_namespaces": 32, 00:11:01.054 "min_cntlid": 1, 00:11:01.054 "max_cntlid": 65519, 00:11:01.054 "namespaces": [ 00:11:01.054 { 00:11:01.054 "nsid": 1, 00:11:01.054 "bdev_name": "Null1", 00:11:01.054 "name": "Null1", 00:11:01.054 "nguid": "430A7AB9CC9241D49B65F17742DFFC61", 00:11:01.054 "uuid": "430a7ab9-cc92-41d4-9b65-f17742dffc61" 00:11:01.054 } 00:11:01.054 ] 00:11:01.054 }, 00:11:01.054 { 00:11:01.054 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:01.054 "subtype": "NVMe", 00:11:01.054 "listen_addresses": [ 00:11:01.054 { 00:11:01.054 "trtype": "TCP", 00:11:01.054 "adrfam": "IPv4", 00:11:01.054 "traddr": "10.0.0.2", 00:11:01.054 "trsvcid": "4420" 00:11:01.054 } 00:11:01.054 ], 00:11:01.054 "allow_any_host": true, 00:11:01.054 "hosts": [], 00:11:01.054 "serial_number": "SPDK00000000000002", 00:11:01.054 "model_number": "SPDK bdev Controller", 00:11:01.054 "max_namespaces": 32, 00:11:01.054 "min_cntlid": 1, 00:11:01.054 "max_cntlid": 65519, 00:11:01.054 "namespaces": [ 00:11:01.054 { 00:11:01.054 "nsid": 1, 00:11:01.054 "bdev_name": "Null2", 00:11:01.054 "name": "Null2", 00:11:01.054 "nguid": "090C5C0EE5EA4719925A5F902AB0481F", 00:11:01.054 "uuid": "090c5c0e-e5ea-4719-925a-5f902ab0481f" 00:11:01.054 } 00:11:01.054 ] 00:11:01.054 }, 00:11:01.054 { 00:11:01.054 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:01.054 "subtype": "NVMe", 00:11:01.054 "listen_addresses": [ 00:11:01.054 { 00:11:01.054 "trtype": "TCP", 00:11:01.054 "adrfam": "IPv4", 00:11:01.054 "traddr": "10.0.0.2", 
00:11:01.054 "trsvcid": "4420" 00:11:01.054 } 00:11:01.054 ], 00:11:01.054 "allow_any_host": true, 00:11:01.054 "hosts": [], 00:11:01.054 "serial_number": "SPDK00000000000003", 00:11:01.054 "model_number": "SPDK bdev Controller", 00:11:01.054 "max_namespaces": 32, 00:11:01.054 "min_cntlid": 1, 00:11:01.054 "max_cntlid": 65519, 00:11:01.054 "namespaces": [ 00:11:01.054 { 00:11:01.054 "nsid": 1, 00:11:01.054 "bdev_name": "Null3", 00:11:01.054 "name": "Null3", 00:11:01.054 "nguid": "217C0E8A3EDC44EF91644F50D0CB78FA", 00:11:01.054 "uuid": "217c0e8a-3edc-44ef-9164-4f50d0cb78fa" 00:11:01.054 } 00:11:01.054 ] 00:11:01.054 }, 00:11:01.054 { 00:11:01.054 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:01.054 "subtype": "NVMe", 00:11:01.054 "listen_addresses": [ 00:11:01.054 { 00:11:01.054 "trtype": "TCP", 00:11:01.054 "adrfam": "IPv4", 00:11:01.054 "traddr": "10.0.0.2", 00:11:01.054 "trsvcid": "4420" 00:11:01.054 } 00:11:01.054 ], 00:11:01.054 "allow_any_host": true, 00:11:01.054 "hosts": [], 00:11:01.054 "serial_number": "SPDK00000000000004", 00:11:01.054 "model_number": "SPDK bdev Controller", 00:11:01.054 "max_namespaces": 32, 00:11:01.054 "min_cntlid": 1, 00:11:01.054 "max_cntlid": 65519, 00:11:01.054 "namespaces": [ 00:11:01.054 { 00:11:01.054 "nsid": 1, 00:11:01.054 "bdev_name": "Null4", 00:11:01.054 "name": "Null4", 00:11:01.054 "nguid": "8F56CD4F341047939E55AEA3655CB631", 00:11:01.054 "uuid": "8f56cd4f-3410-4793-9e55-aea3655cb631" 00:11:01.054 } 00:11:01.054 ] 00:11:01.054 } 00:11:01.054 ] 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.054 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:01.313 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.314 rmmod nvme_tcp 00:11:01.314 rmmod nvme_fabrics 00:11:01.314 rmmod nvme_keyring 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 556187 ']' 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 556187 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 556187 ']' 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 556187 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 556187 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 556187' 00:11:01.314 killing process with pid 556187 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 556187 00:11:01.314 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 556187 00:11:01.579 12:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.579 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.117 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.117 00:11:04.117 real 0m5.796s 00:11:04.117 user 0m5.023s 00:11:04.117 sys 0m2.011s 00:11:04.117 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.117 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:04.117 ************************************ 00:11:04.117 END TEST nvmf_target_discovery 00:11:04.117 ************************************ 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.118 ************************************ 00:11:04.118 START TEST nvmf_referrals 00:11:04.118 ************************************ 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.118 * Looking for test storage... 
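The discovery test that just finished (END TEST nvmf_target_discovery above) drove four identical subsystems through the harness's rpc_cmd wrapper. Condensed to direct rpc.py calls, a sketch of that per-subsystem setup loop looks like the following; rpc.py paths and the default RPC socket are assumptions, the sizes, NQNs, and serial numbers are taken verbatim from the log:

    # Sketch of the discovery.sh setup loop: one null bdev, subsystem,
    # namespace, and TCP listener per cnode (values as recorded above).
    for i in 1 2 3 4; do
        ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The nvme discover output above then reports six discovery log entries: the current discovery subsystem, the four cnode subsystems, and the 4430 referral.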
00:11:04.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:04.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.118 --rc genhtml_branch_coverage=1 00:11:04.118 --rc genhtml_function_coverage=1 00:11:04.118 --rc genhtml_legend=1 00:11:04.118 --rc geninfo_all_blocks=1 00:11:04.118 --rc geninfo_unexecuted_blocks=1 00:11:04.118 00:11:04.118 ' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:04.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.118 --rc genhtml_branch_coverage=1 00:11:04.118 --rc genhtml_function_coverage=1 00:11:04.118 --rc genhtml_legend=1 00:11:04.118 --rc geninfo_all_blocks=1 00:11:04.118 --rc geninfo_unexecuted_blocks=1 00:11:04.118 00:11:04.118 ' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:04.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.118 --rc genhtml_branch_coverage=1 00:11:04.118 --rc genhtml_function_coverage=1 00:11:04.118 --rc genhtml_legend=1 00:11:04.118 --rc geninfo_all_blocks=1 00:11:04.118 --rc geninfo_unexecuted_blocks=1 00:11:04.118 00:11:04.118 ' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:04.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.118 --rc genhtml_branch_coverage=1 00:11:04.118 --rc genhtml_function_coverage=1 00:11:04.118 --rc genhtml_legend=1 00:11:04.118 --rc geninfo_all_blocks=1 00:11:04.118 --rc geninfo_unexecuted_blocks=1 00:11:04.118 00:11:04.118 ' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.118 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
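The "[: : integer expression expected" warning captured above comes from the xtraced test '[' '' -eq 1 ']': an unset flag expands to an empty string, which [ cannot compare as a number. A minimal sketch of the usual guard for this; SOME_TEST_FLAG is a hypothetical stand-in, since the real variable checked at common.sh line 33 is not visible in this log:

    # Default an empty/unset flag to 0 before the numeric comparison.
    # SOME_TEST_FLAG is hypothetical; the log does not show the real name.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi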
00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.119 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:06.022 12:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:06.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:06.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:06.022 
12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:06.022 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.022 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:06.023 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.023 12:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.023 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:06.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:11:06.282 00:11:06.282 --- 10.0.0.2 ping statistics --- 00:11:06.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.282 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:06.282 00:11:06.282 --- 10.0.0.1 ping statistics --- 00:11:06.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.282 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=558280 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 558280 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 558280 ']' 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
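The lines above launch nvmf_tgt (pid 558280) inside the cvl_0_0_ns_spdk namespace and block until its RPC socket answers. A standalone sketch of that launch-and-wait step; the polling loop is an assumption, as the harness uses its own waitforlisten() helper instead:

    # Start the target in the namespace with the flags recorded in the log.
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the app answers on the default RPC socket.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done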
00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:06.282 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.282 [2024-10-30 12:22:38.846668] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:11:06.282 [2024-10-30 12:22:38.846762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.282 [2024-10-30 12:22:38.920605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.541 [2024-10-30 12:22:38.981977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.541 [2024-10-30 12:22:38.982030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.541 [2024-10-30 12:22:38.982058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.541 [2024-10-30 12:22:38.982069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.542 [2024-10-30 12:22:38.982079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.542 [2024-10-30 12:22:38.983728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.542 [2024-10-30 12:22:38.983792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.542 [2024-10-30 12:22:38.983856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.542 [2024-10-30 12:22:38.983859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 [2024-10-30 12:22:39.137087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
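referrals.sh then brings up the transport and requests the well-known discovery listener on port 8009, as xtraced above. Expressed as direct rpc.py calls (default RPC socket assumed, flags copied verbatim from the log, where -u sets the I/O unit size):

    # Create the TCP transport with the options recorded above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # Expose the well-known discovery subsystem on the standard port 8009.
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

The referral registrations for 127.0.0.2, 127.0.0.3, and 127.0.0.4 on port 4430 follow below, after which nvmf_discovery_get_referrals piped through jq confirms all three entries.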
00:11:06.542 [2024-10-30 12:22:39.149377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.542 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.800 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.801 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:07.058 12:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.058 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.059 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.315 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:07.572 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.573 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:07.573 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:07.573 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:07.573 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.573 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.830 12:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.830 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:08.094 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:08.094 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:08.094 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:08.094 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:08.094 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.094 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.352 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
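Annotation: with the referral list empty again, the harness moves into nvmftestfini below. Condensed, the workflow referrals.sh just exercised was: register three referrals, confirm them over RPC and from the initiator side, remove them, then repeat with subsystem-qualified referrals and check that each surfaces with the right subtype in the discovery log page:

    # Register referrals and count them over RPC (referrals.sh@44-48).
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length        # -> 3

    # Initiator-side view (get_referral_ips nvme, referrals.sh@26); the jq
    # filter drops the "current discovery subsystem" record so only the
    # referred-to entries remain.
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

    # A referral may name the subsystem it points at (referrals.sh@60-62):
    # "-n discovery" shows up as a "discovery subsystem referral" record,
    # a concrete NQN as an "nvme subsystem" record.
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1

    # Removal takes the same addressing triple, plus -n for named referrals.
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1

Here NVME_HOST is the --hostnqn/--hostid pair that nvmf/common.sh builds from nvme gen-hostnqn, visible later in this log when connect_disconnect.sh re-sources common.sh.
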
00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.610 rmmod nvme_tcp 00:11:08.610 rmmod nvme_fabrics 00:11:08.610 rmmod nvme_keyring 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 558280 ']' 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 558280 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 558280 ']' 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 558280 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 558280 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 558280' 00:11:08.610 killing process with pid 558280 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 558280 00:11:08.610 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 558280 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.868 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.408 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.409 00:11:11.409 real 0m7.171s 00:11:11.409 user 0m11.231s 00:11:11.409 sys 0m2.316s 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:11.409 ************************************ 00:11:11.409 END TEST nvmf_referrals 00:11:11.409 ************************************ 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.409 ************************************ 00:11:11.409 START TEST nvmf_connect_disconnect 00:11:11.409 ************************************ 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:11.409 * Looking for test storage... 00:11:11.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:11.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.409 --rc genhtml_branch_coverage=1 00:11:11.409 --rc genhtml_function_coverage=1 00:11:11.409 --rc genhtml_legend=1 00:11:11.409 --rc geninfo_all_blocks=1 00:11:11.409 --rc geninfo_unexecuted_blocks=1 00:11:11.409 00:11:11.409 ' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:11.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.409 --rc genhtml_branch_coverage=1 00:11:11.409 --rc genhtml_function_coverage=1 00:11:11.409 --rc genhtml_legend=1 00:11:11.409 --rc geninfo_all_blocks=1 00:11:11.409 --rc geninfo_unexecuted_blocks=1 00:11:11.409 00:11:11.409 ' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:11.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.409 --rc genhtml_branch_coverage=1 00:11:11.409 --rc genhtml_function_coverage=1 00:11:11.409 --rc genhtml_legend=1 00:11:11.409 --rc geninfo_all_blocks=1 00:11:11.409 --rc geninfo_unexecuted_blocks=1 00:11:11.409 00:11:11.409 ' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:11.409 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.409 --rc genhtml_branch_coverage=1 00:11:11.409 --rc genhtml_function_coverage=1 00:11:11.409 --rc genhtml_legend=1 00:11:11.409 --rc geninfo_all_blocks=1 00:11:11.409 --rc geninfo_unexecuted_blocks=1 00:11:11.409 00:11:11.409 ' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.409 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.410 12:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.316 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.317 
12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:13.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.317 
12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:13.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:13.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
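Annotation: the e810 scan running here is pure sysfs bookkeeping. For each supported PCI function (0x8086:0x159b, bound to the ice driver), common.sh@410-429 globs the attached netdev names out of the device's sysfs node; reduced to its essentials:

    # NIC discovery: the netdev name for a PCI function is simply the
    # directory name under /sys/bus/pci/devices/<pci>/net/.
    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done
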
00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:13.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.317 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:13.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:11:13.577 00:11:13.577 --- 10.0.0.2 ping statistics --- 00:11:13.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.577 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:11:13.577 00:11:13.577 --- 10.0.0.1 ping statistics --- 00:11:13.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.577 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=560703 00:11:13.577 12:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 560703 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 560703 ']' 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.577 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.577 [2024-10-30 12:22:46.109718] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:11:13.577 [2024-10-30 12:22:46.109792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.577 [2024-10-30 12:22:46.184423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.577 [2024-10-30 12:22:46.244395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.577 [2024-10-30 12:22:46.244444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.577 [2024-10-30 12:22:46.244471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.577 [2024-10-30 12:22:46.244483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.577 [2024-10-30 12:22:46.244492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
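Annotation: before this second target came up, nvmf_tcp_init (common.sh@250-291 in the trace above) ran the same plumbing as in the previous test: the target-side port moves into a namespace, addresses are assigned, the firewall is opened for the NVMe/TCP data port, and a single ping in each direction proves reachability. Spelled out from the traced commands:

    # Split the two e810 ports across a namespace boundary.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 stays in the root namespace; 10.0.0.2 goes to the target side.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the data port (tagged so teardown can strip the rule again)
    # and verify both directions (common.sh@287-291).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
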
00:11:13.577 [2024-10-30 12:22:46.245974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.577 [2024-10-30 12:22:46.246032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.577 [2024-10-30 12:22:46.246054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.577 [2024-10-30 12:22:46.246057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.836 [2024-10-30 12:22:46.396027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.836 12:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.836 [2024-10-30 12:22:46.469768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:13.836 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:17.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.984 rmmod nvme_tcp 00:11:27.984 rmmod nvme_fabrics 00:11:27.984 rmmod nvme_keyring 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 560703 ']' 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 560703 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 560703 ']' 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 560703 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
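[editor's note] The target configuration and the connect/disconnect loop exercised above amount to the following. This is a minimal sketch assuming rpc.py is pointed at the freshly started nvmf_tgt (the test itself goes through the rpc_cmd wrapper); the nvme-cli calls are a plausible equivalent of the helper the test uses, whose disconnect output matches the "disconnected 1 controller(s)" lines in the trace.

# configure the target over JSON-RPC (matches the rpc_cmd calls in the trace)
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                      # returns bdev name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# five connect/disconnect iterations (num_iterations=5 in the trace)
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # "NQN:... disconnected 1 controller(s)"
done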
00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 560703 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 560703' 00:11:27.984 killing process with pid 560703 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 560703 00:11:27.984 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 560703 00:11:28.244 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.245 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.156 00:11:30.156 real 0m19.230s 00:11:30.156 user 0m57.498s 00:11:30.156 sys 0m3.470s 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:30.156 ************************************ 00:11:30.156 END TEST nvmf_connect_disconnect 00:11:30.156 ************************************ 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:11:30.156 ************************************ 00:11:30.156 START TEST nvmf_multitarget 00:11:30.156 ************************************ 00:11:30.156 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:30.415 * Looking for test storage... 00:11:30.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:30.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.415 --rc genhtml_branch_coverage=1 00:11:30.415 --rc genhtml_function_coverage=1 00:11:30.415 --rc genhtml_legend=1 00:11:30.415 --rc geninfo_all_blocks=1 00:11:30.415 --rc geninfo_unexecuted_blocks=1 00:11:30.415 00:11:30.415 ' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:30.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.415 --rc genhtml_branch_coverage=1 00:11:30.415 --rc genhtml_function_coverage=1 00:11:30.415 --rc genhtml_legend=1 00:11:30.415 --rc geninfo_all_blocks=1 00:11:30.415 --rc geninfo_unexecuted_blocks=1 00:11:30.415 00:11:30.415 ' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:30.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.415 --rc genhtml_branch_coverage=1 00:11:30.415 --rc genhtml_function_coverage=1 00:11:30.415 --rc genhtml_legend=1 00:11:30.415 --rc geninfo_all_blocks=1 00:11:30.415 --rc geninfo_unexecuted_blocks=1 00:11:30.415 00:11:30.415 ' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:30.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.415 --rc genhtml_branch_coverage=1 00:11:30.415 --rc genhtml_function_coverage=1 00:11:30.415 --rc genhtml_legend=1 00:11:30.415 --rc geninfo_all_blocks=1 00:11:30.415 --rc geninfo_unexecuted_blocks=1 00:11:30.415 00:11:30.415 ' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.415 12:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.415 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:30.416 12:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.416 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.952 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:32.953 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:32.953 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:32.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:32.953 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:11:32.953 00:11:32.953 --- 10.0.0.2 ping statistics --- 00:11:32.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.953 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:11:32.953 00:11:32.953 --- 10.0.0.1 ping statistics --- 00:11:32.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.953 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=564457 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 564457 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 564457 ']' 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:32.953 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.953 [2024-10-30 12:23:05.281887] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:11:32.953 [2024-10-30 12:23:05.281958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.954 [2024-10-30 12:23:05.349758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.954 [2024-10-30 12:23:05.403931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.954 [2024-10-30 12:23:05.403986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.954 [2024-10-30 12:23:05.404021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.954 [2024-10-30 12:23:05.404033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.954 [2024-10-30 12:23:05.404043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.954 [2024-10-30 12:23:05.405707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.954 [2024-10-30 12:23:05.405767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.954 [2024-10-30 12:23:05.407275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.954 [2024-10-30 12:23:05.407280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:32.954 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:33.211 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:33.212 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:33.212 "nvmf_tgt_1" 00:11:33.212 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:33.212 "nvmf_tgt_2" 00:11:33.469 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:11:33.469 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:33.469 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:33.469 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:33.469 true 00:11:33.469 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:33.728 true 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.728 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.728 rmmod nvme_tcp 00:11:33.728 rmmod nvme_fabrics 00:11:33.728 rmmod nvme_keyring 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 564457 ']' 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 564457 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 564457 ']' 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 564457 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 564457 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.986 12:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 564457' 00:11:33.986 killing process with pid 564457 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 564457 00:11:33.986 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 564457 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.246 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.155 00:11:36.155 real 0m5.948s 00:11:36.155 user 0m6.730s 00:11:36.155 sys 0m2.038s 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:36.155 ************************************ 00:11:36.155 END TEST nvmf_multitarget 00:11:36.155 ************************************ 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.155 ************************************ 00:11:36.155 START TEST nvmf_rpc 00:11:36.155 ************************************ 00:11:36.155 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:36.415 * Looking for test storage... 
00:11:36.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.415 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:36.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.416 --rc genhtml_branch_coverage=1 00:11:36.416 --rc genhtml_function_coverage=1 00:11:36.416 --rc genhtml_legend=1 00:11:36.416 --rc geninfo_all_blocks=1 00:11:36.416 --rc geninfo_unexecuted_blocks=1 00:11:36.416 00:11:36.416 ' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:36.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.416 --rc genhtml_branch_coverage=1 00:11:36.416 --rc genhtml_function_coverage=1 00:11:36.416 --rc genhtml_legend=1 00:11:36.416 --rc geninfo_all_blocks=1 00:11:36.416 --rc geninfo_unexecuted_blocks=1 00:11:36.416 00:11:36.416 ' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:36.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.416 --rc genhtml_branch_coverage=1 00:11:36.416 --rc genhtml_function_coverage=1 00:11:36.416 --rc genhtml_legend=1 00:11:36.416 --rc geninfo_all_blocks=1 00:11:36.416 --rc geninfo_unexecuted_blocks=1 00:11:36.416 00:11:36.416 ' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:36.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.416 --rc genhtml_branch_coverage=1 00:11:36.416 --rc genhtml_function_coverage=1 00:11:36.416 --rc genhtml_legend=1 00:11:36.416 --rc geninfo_all_blocks=1 00:11:36.416 --rc geninfo_unexecuted_blocks=1 00:11:36.416 00:11:36.416 ' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
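[editor's note] Stepping back to the multitarget test that completed above: its pass/fail logic reduces to counting targets before and after each RPC. A minimal sketch of that flow, assuming a running nvmf_tgt; the counts 1 -> 3 -> 1 include the implicit default target and match the jq length checks in the trace.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default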
00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.416 12:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.416 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.950 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:38.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:38.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:38.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:38.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.951 12:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:11:38.951 00:11:38.951 --- 10.0.0.2 ping statistics --- 00:11:38.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.951 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:38.951 00:11:38.951 --- 10.0.0.1 ping statistics --- 00:11:38.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.951 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=566578 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 566578 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 566578 ']' 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:38.951 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.951 [2024-10-30 12:23:11.440048] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
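
[Editor's annotation, not captured output] nvmftestinit above builds the phy-mode test topology: the two e810 ports (cvl_0_0, cvl_0_1) sit back to back on one host, the target-side port is moved into the cvl_0_0_ns_spdk network namespace, connectivity is verified with one ping in each direction, and nvmf_tgt is then launched inside that namespace (the "Starting SPDK" banner). Condensed into a sketch from the commands visible in the trace; the device names and 10.0.0.0/24 addresses are simply what this run used, and the iptables comment tag from the log is omitted for brevity:

  ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port out of the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                    # root ns -> target reachable
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator reachable
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
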
00:11:38.951 [2024-10-30 12:23:11.440120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.952 [2024-10-30 12:23:11.513639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.952 [2024-10-30 12:23:11.573156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.952 [2024-10-30 12:23:11.573207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.952 [2024-10-30 12:23:11.573236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.952 [2024-10-30 12:23:11.573254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.952 [2024-10-30 12:23:11.573271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.952 [2024-10-30 12:23:11.574919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.952 [2024-10-30 12:23:11.574943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.952 [2024-10-30 12:23:11.574996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.952 [2024-10-30 12:23:11.574999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:39.209 "tick_rate": 2700000000, 00:11:39.209 "poll_groups": [ 00:11:39.209 { 00:11:39.209 "name": "nvmf_tgt_poll_group_000", 00:11:39.209 "admin_qpairs": 0, 00:11:39.209 "io_qpairs": 0, 00:11:39.209 "current_admin_qpairs": 0, 00:11:39.209 "current_io_qpairs": 0, 00:11:39.209 "pending_bdev_io": 0, 00:11:39.209 "completed_nvme_io": 0, 00:11:39.209 "transports": [] 00:11:39.209 }, 00:11:39.209 { 00:11:39.209 "name": "nvmf_tgt_poll_group_001", 00:11:39.209 "admin_qpairs": 0, 00:11:39.209 "io_qpairs": 0, 00:11:39.209 "current_admin_qpairs": 0, 00:11:39.209 "current_io_qpairs": 0, 00:11:39.209 "pending_bdev_io": 0, 00:11:39.209 "completed_nvme_io": 0, 00:11:39.209 "transports": [] 00:11:39.209 }, 00:11:39.209 { 00:11:39.209 "name": "nvmf_tgt_poll_group_002", 00:11:39.209 "admin_qpairs": 0, 00:11:39.209 "io_qpairs": 0, 00:11:39.209 
"current_admin_qpairs": 0, 00:11:39.209 "current_io_qpairs": 0, 00:11:39.209 "pending_bdev_io": 0, 00:11:39.209 "completed_nvme_io": 0, 00:11:39.209 "transports": [] 00:11:39.209 }, 00:11:39.209 { 00:11:39.209 "name": "nvmf_tgt_poll_group_003", 00:11:39.209 "admin_qpairs": 0, 00:11:39.209 "io_qpairs": 0, 00:11:39.209 "current_admin_qpairs": 0, 00:11:39.209 "current_io_qpairs": 0, 00:11:39.209 "pending_bdev_io": 0, 00:11:39.209 "completed_nvme_io": 0, 00:11:39.209 "transports": [] 00:11:39.209 } 00:11:39.209 ] 00:11:39.209 }' 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:39.209 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 [2024-10-30 12:23:11.826716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:39.210 "tick_rate": 2700000000, 00:11:39.210 "poll_groups": [ 00:11:39.210 { 00:11:39.210 "name": "nvmf_tgt_poll_group_000", 00:11:39.210 "admin_qpairs": 0, 00:11:39.210 "io_qpairs": 0, 00:11:39.210 "current_admin_qpairs": 0, 00:11:39.210 "current_io_qpairs": 0, 00:11:39.210 "pending_bdev_io": 0, 00:11:39.210 "completed_nvme_io": 0, 00:11:39.210 "transports": [ 00:11:39.210 { 00:11:39.210 "trtype": "TCP" 00:11:39.210 } 00:11:39.210 ] 00:11:39.210 }, 00:11:39.210 { 00:11:39.210 "name": "nvmf_tgt_poll_group_001", 00:11:39.210 "admin_qpairs": 0, 00:11:39.210 "io_qpairs": 0, 00:11:39.210 "current_admin_qpairs": 0, 00:11:39.210 "current_io_qpairs": 0, 00:11:39.210 "pending_bdev_io": 0, 00:11:39.210 "completed_nvme_io": 0, 00:11:39.210 "transports": [ 00:11:39.210 { 00:11:39.210 "trtype": "TCP" 00:11:39.210 } 00:11:39.210 ] 00:11:39.210 }, 00:11:39.210 { 00:11:39.210 "name": "nvmf_tgt_poll_group_002", 00:11:39.210 "admin_qpairs": 0, 00:11:39.210 "io_qpairs": 0, 00:11:39.210 "current_admin_qpairs": 0, 00:11:39.210 "current_io_qpairs": 0, 00:11:39.210 "pending_bdev_io": 0, 00:11:39.210 "completed_nvme_io": 0, 00:11:39.210 "transports": [ 00:11:39.210 { 00:11:39.210 "trtype": "TCP" 
00:11:39.210 } 00:11:39.210 ] 00:11:39.210 }, 00:11:39.210 { 00:11:39.210 "name": "nvmf_tgt_poll_group_003", 00:11:39.210 "admin_qpairs": 0, 00:11:39.210 "io_qpairs": 0, 00:11:39.210 "current_admin_qpairs": 0, 00:11:39.210 "current_io_qpairs": 0, 00:11:39.210 "pending_bdev_io": 0, 00:11:39.210 "completed_nvme_io": 0, 00:11:39.210 "transports": [ 00:11:39.210 { 00:11:39.210 "trtype": "TCP" 00:11:39.210 } 00:11:39.210 ] 00:11:39.210 } 00:11:39.210 ] 00:11:39.210 }' 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:39.210 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 Malloc1 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.468 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 [2024-10-30 12:23:12.003114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:39.468 [2024-10-30 12:23:12.025685] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:39.468 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:39.468 could not add new controller: failed to write to nvme-fabrics device 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:39.468 12:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.468 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.413 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.413 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:40.413 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.413 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:40.413 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.430 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.431 [2024-10-30 12:23:14.866645] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:42.431 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:42.431 could not add new controller: failed to write to nvme-fabrics device 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.431 
12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.431 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.997 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.997 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:42.997 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.997 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:42.997 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.525 
12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.525 [2024-10-30 12:23:17.712291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.525 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.783 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.783 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:45.783 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.783 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:45.783 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:47.682 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.941 [2024-10-30 12:23:20.486076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.941 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.508 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.508 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:48.508 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.508 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:48.508 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 [2024-10-30 12:23:23.257985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.296 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.296 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:51.296 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.296 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:51.296 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:53.822 
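
Each nvme connect above is followed by "waitforserial SPDKISFASTANDAWESOME", whose xtrace lines show the mechanism: autotest_common.sh polls lsblk every two seconds, counting block devices whose SERIAL column matches the subsystem serial, until the count reaches the expected device count. A minimal sketch of that loop, reconstructed from the trace (the retry bound and counter variables are as logged; the failure path once the budget is exhausted is an assumption):

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches the target serial.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1   # assumed: give up after the retry budget runs out
}
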
12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:53.822 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:53.822 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.822 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:53.822 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.822 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:53.822 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 [2024-10-30 12:23:26.076120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.822 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.081 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.081 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:54.081 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.081 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:54.081 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.609 [2024-10-30 12:23:28.848836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.609 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.867 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.867 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:56.867 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.867 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:56.867 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.394 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:59.395 
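
The loop that just finished (target/rpc.sh lines 81-94 in the trace) exercised the full path on every iteration: create the subsystem, listen on 10.0.0.2:4420, attach Malloc1 as NSID 5, connect a host, wait for the serial, disconnect, then remove the namespace and delete the subsystem. The loop starting here (lines 99-107) drops the host connect and simply cycles the subsystem lifecycle five times. Condensed from the rpc_cmd calls in the trace, with rpc.py invoked directly (its path and defaults are an illustration; the method names and arguments are exactly as logged):

for i in $(seq 1 5); do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
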
12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 [2024-10-30 12:23:31.705985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 [2024-10-30 12:23:31.754067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 
12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 [2024-10-30 12:23:31.802224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.395 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 [2024-10-30 12:23:31.850410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 [2024-10-30 12:23:31.898587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:59.396 "tick_rate": 2700000000, 00:11:59.396 "poll_groups": [ 00:11:59.396 { 00:11:59.396 "name": "nvmf_tgt_poll_group_000", 00:11:59.396 "admin_qpairs": 2, 00:11:59.396 "io_qpairs": 84, 00:11:59.396 "current_admin_qpairs": 0, 00:11:59.396 "current_io_qpairs": 0, 00:11:59.396 "pending_bdev_io": 0, 00:11:59.396 "completed_nvme_io": 232, 00:11:59.396 "transports": [ 00:11:59.396 { 00:11:59.396 "trtype": "TCP" 00:11:59.396 } 00:11:59.396 ] 00:11:59.396 }, 00:11:59.396 { 00:11:59.396 "name": "nvmf_tgt_poll_group_001", 00:11:59.396 "admin_qpairs": 2, 00:11:59.396 "io_qpairs": 84, 00:11:59.396 "current_admin_qpairs": 0, 00:11:59.396 "current_io_qpairs": 0, 00:11:59.396 "pending_bdev_io": 0, 00:11:59.396 "completed_nvme_io": 177, 00:11:59.396 "transports": [ 00:11:59.396 { 00:11:59.396 "trtype": "TCP" 00:11:59.396 } 00:11:59.396 ] 00:11:59.396 }, 00:11:59.396 { 00:11:59.396 "name": "nvmf_tgt_poll_group_002", 00:11:59.396 "admin_qpairs": 1, 00:11:59.396 "io_qpairs": 84, 00:11:59.396 "current_admin_qpairs": 0, 00:11:59.396 "current_io_qpairs": 0, 00:11:59.396 "pending_bdev_io": 0, 00:11:59.396 "completed_nvme_io": 190, 00:11:59.396 "transports": [ 00:11:59.396 { 00:11:59.396 "trtype": "TCP" 00:11:59.396 } 00:11:59.396 ] 00:11:59.396 }, 00:11:59.396 { 00:11:59.396 "name": "nvmf_tgt_poll_group_003", 00:11:59.396 "admin_qpairs": 2, 00:11:59.396 "io_qpairs": 84, 00:11:59.396 "current_admin_qpairs": 0, 00:11:59.396 "current_io_qpairs": 0, 00:11:59.396 "pending_bdev_io": 0, 00:11:59.396 "completed_nvme_io": 87, 00:11:59.396 "transports": [ 00:11:59.396 { 00:11:59.396 "trtype": "TCP" 00:11:59.396 } 00:11:59.396 ] 00:11:59.396 } 00:11:59.396 ] 00:11:59.396 }' 00:11:59.396 12:23:31 
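
The jsum helper applied next (target/rpc.sh lines 19-20 in the trace) reduces one numeric field across all poll groups: jq emits one value per group and awk sums them. For the stats JSON above that yields 7 admin qpairs (2+2+1+2) and 336 io qpairs (4 x 84), each checked to be greater than zero. A sketch, assuming $stats holds the nvmf_get_stats output:

jsum() {
    local filter=$1
    # One number per poll group, summed by awk.
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # 7 for the stats above
jsum '.poll_groups[].io_qpairs'      # 336
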
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:59.396 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.396 rmmod nvme_tcp 00:11:59.396 rmmod nvme_fabrics 00:11:59.396 rmmod nvme_keyring 00:11:59.396 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 566578 ']' 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 566578 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 566578 ']' 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 566578 00:11:59.397 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 566578 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 566578' 
00:11:59.655 killing process with pid 566578 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 566578 00:11:59.655 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 566578 00:11:59.914 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.915 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.823 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.823 00:12:01.823 real 0m25.632s 00:12:01.823 user 1m22.558s 00:12:01.823 sys 0m4.409s 00:12:01.823 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.823 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.823 ************************************ 00:12:01.823 END TEST nvmf_rpc 00:12:01.823 ************************************ 00:12:01.824 12:23:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:01.824 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:01.824 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.824 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.824 ************************************ 00:12:01.824 START TEST nvmf_invalid 00:12:01.824 ************************************ 00:12:01.824 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.084 * Looking for test storage... 
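
The run_test wrapper visible in these banners times each suite (0m25.632s of wall clock against 1m22.558s of CPU for nvmf_rpc) and prints the START/END rows before launching nvmf_invalid. A loose sketch of its shape, inferred only from the banners and the "'[' 3 -le 1 ']'" argument guard in the trace; the real helper in autotest_common.sh also manages xtrace and exit-status bookkeeping, which is omitted here:

run_test() {
    # Guard seen in the trace: a suite needs a name plus a command to run.
    [ "$#" -le 1 ] && return 1
    local suite=$1; shift
    echo "START TEST $suite"
    time "$@"          # produces the real/user/sys summary logged above
    echo "END TEST $suite"
}
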
00:12:02.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:02.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.084 --rc genhtml_branch_coverage=1 00:12:02.084 --rc genhtml_function_coverage=1 00:12:02.084 --rc genhtml_legend=1 00:12:02.084 --rc geninfo_all_blocks=1 00:12:02.084 --rc geninfo_unexecuted_blocks=1 00:12:02.084 00:12:02.084 ' 00:12:02.084 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:02.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.084 --rc genhtml_branch_coverage=1 00:12:02.084 --rc genhtml_function_coverage=1 00:12:02.084 --rc genhtml_legend=1 00:12:02.084 --rc geninfo_all_blocks=1 00:12:02.085 --rc geninfo_unexecuted_blocks=1 00:12:02.085 00:12:02.085 ' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:02.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.085 --rc genhtml_branch_coverage=1 00:12:02.085 --rc genhtml_function_coverage=1 00:12:02.085 --rc genhtml_legend=1 00:12:02.085 --rc geninfo_all_blocks=1 00:12:02.085 --rc geninfo_unexecuted_blocks=1 00:12:02.085 00:12:02.085 ' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:02.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.085 --rc genhtml_branch_coverage=1 00:12:02.085 --rc genhtml_function_coverage=1 00:12:02.085 --rc genhtml_legend=1 00:12:02.085 --rc geninfo_all_blocks=1 00:12:02.085 --rc geninfo_unexecuted_blocks=1 00:12:02.085 00:12:02.085 ' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:02.085 12:23:34 
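
The version gate that just ran, "lt 1.15 2", is cmp_versions from scripts/common.sh: both version strings are split on ".", "-" and ":" into arrays and compared numerically component by component, left to right, which is how the detected lcov here ends up on the pre-2.0 coverage flag set. A reduced sketch of the "<" branch the trace exercises (padding absent components with 0 is an assumption, and the real code routes each component through decimal() for validation first):

version_lt() {
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"
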
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.085 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:04.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.620 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:04.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:04.621 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:04.621 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:04.621 00:12:04.621 --- 10.0.0.2 ping statistics --- 00:12:04.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.621 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:12:04.621 00:12:04.621 --- 10.0.0.1 ping statistics --- 00:12:04.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.621 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:04.621 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=571084 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 571084 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 571084 ']' 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.621 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:04.621 [2024-10-30 12:23:37.055637] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:12:04.621 [2024-10-30 12:23:37.055759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:12:04.621 [2024-10-30 12:23:37.145215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 
00:12:04.621 [2024-10-30 12:23:37.207132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:04.621 [2024-10-30 12:23:37.207188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:04.621 [2024-10-30 12:23:37.207212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 
00:12:04.621 [2024-10-30 12:23:37.207224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:04.621 [2024-10-30 12:23:37.207235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:04.621 [2024-10-30 12:23:37.208876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 
00:12:04.621 [2024-10-30 12:23:37.208945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 
00:12:04.621 [2024-10-30 12:23:37.208979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 
00:12:04.621 [2024-10-30 12:23:37.208984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 
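    [Sketch for context, not captured output: the notices above show nvmf_tgt coming up inside the
    cvl_0_0_ns_spdk namespace with core mask 0xF (hence the four reactor threads) and all tracepoint
    groups enabled (-e 0xFFFF), after which waitforlisten polls the RPC socket. A minimal stand-alone
    equivalent from a built SPDK tree would be roughly:
        build/bin/nvmf_tgt -m 0xF -e 0xFFFF &
        # keep retrying a cheap RPC until the app answers on /var/tmp/spdk.sock (same idea as waitforlisten)
        until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done]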
00:12:04.879 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2333 
00:12:05.136 [2024-10-30 12:23:37.586789] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 
00:12:05.136 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 
00:12:05.136 { 
00:12:05.136 "nqn": "nqn.2016-06.io.spdk:cnode2333", 
00:12:05.136 "tgt_name": "foobar", 
00:12:05.136 "method": "nvmf_create_subsystem", 
00:12:05.136 "req_id": 1 
00:12:05.136 } 
00:12:05.136 Got JSON-RPC error response 
00:12:05.136 response: 
00:12:05.136 { 
00:12:05.136 "code": -32603, 
00:12:05.136 "message": "Unable to find target foobar" 
00:12:05.136 }' 
00:12:05.136 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 
00:12:05.136 { 
00:12:05.136 "nqn": "nqn.2016-06.io.spdk:cnode2333", 
00:12:05.136 "tgt_name": "foobar", 
00:12:05.136 "method": "nvmf_create_subsystem", 
00:12:05.136 "req_id": 1 
00:12:05.136 } 
00:12:05.136 Got JSON-RPC error response 
00:12:05.136 response: 
00:12:05.136 { 
00:12:05.136 "code": -32603, 
00:12:05.136 "message": "Unable to find target foobar" 
00:12:05.136 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 
00:12:05.136 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 
00:12:05.136 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12341 
00:12:05.392 [2024-10-30 12:23:37.859699] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12341: invalid serial number 'SPDKISFASTANDAWESOME' 
00:12:05.392 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 
00:12:05.392 { 
00:12:05.392 "nqn": "nqn.2016-06.io.spdk:cnode12341", 
00:12:05.392 "serial_number": "SPDKISFASTANDAWESOME\u001f", 
00:12:05.392 "method": "nvmf_create_subsystem", 
00:12:05.392 "req_id": 1 
00:12:05.392 } 
00:12:05.392 Got JSON-RPC error response 
00:12:05.392 response: 
00:12:05.392 { 
00:12:05.392 "code": -32602, 
00:12:05.392 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 
00:12:05.392 }' 
00:12:05.393 12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 
00:12:05.393 { 
00:12:05.393 "nqn": "nqn.2016-06.io.spdk:cnode12341", 
00:12:05.393 "serial_number": "SPDKISFASTANDAWESOME\u001f", 
00:12:05.393 "method": "nvmf_create_subsystem", 
00:12:05.393 "req_id": 1 
00:12:05.393 } 
00:12:05.393 Got JSON-RPC error response 
00:12:05.393 response: 
00:12:05.393 { 
00:12:05.393 "code": -32602, 
00:12:05.393 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 
00:12:05.393 } == *\I\n\v\a\l\i\d\ \S\N* ]] 
12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 
12:23:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6945 
00:12:05.650 [2024-10-30 12:23:38.128591] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6945: invalid model number 'SPDK_Controller' 
00:12:05.650 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 
00:12:05.650 { 
00:12:05.650 "nqn": "nqn.2016-06.io.spdk:cnode6945", 
00:12:05.650 "model_number": "SPDK_Controller\u001f", 
00:12:05.650 "method": "nvmf_create_subsystem", 
00:12:05.650 "req_id": 1 
00:12:05.650 } 
00:12:05.650 Got JSON-RPC error response 
00:12:05.650 response: 
00:12:05.650 { 
00:12:05.650 "code": -32602, 
00:12:05.650 "message": "Invalid MN SPDK_Controller\u001f" 
00:12:05.650 }' 
00:12:05.650 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 
00:12:05.650 { 
00:12:05.650 "nqn": "nqn.2016-06.io.spdk:cnode6945", 
00:12:05.650 "model_number": "SPDK_Controller\u001f", 
00:12:05.650 "method": "nvmf_create_subsystem", 
00:12:05.650 "req_id": 1 
00:12:05.650 } 
00:12:05.650 Got JSON-RPC error response 
00:12:05.650 response: 
00:12:05.650 { 
00:12:05.650 "code": -32602, 
00:12:05.650 "message": "Invalid MN SPDK_Controller\u001f" 
00:12:05.650 } == *\I\n\v\a\l\i\d\ \M\N* ]] 
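    [Sketch for context, not captured output: all three rejections above are rpc_nvmf_create_subsystem
    validating its arguments. An unknown target name fails with code -32603, while a serial number or
    model number carrying the non-printable byte 0x1f fails with -32602, because the NVMe Identify
    Controller SN/MN fields only admit printable ASCII. The same checks can be exercised by hand
    against a running target (the nqns here are arbitrary examples):
        scripts/rpc.py nvmf_create_subsystem -s GOODSN01 nqn.2016-06.io.spdk:cnode101      # printable SN: accepted
        scripts/rpc.py nvmf_create_subsystem -s $'BAD\037SN' nqn.2016-06.io.spdk:cnode102  # embedded 0x1f: Invalid SN]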
12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 
12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:12:05.650 12:23:38 [xtrace of gen_random_s elided: target/invalid.sh@21-28 defines the candidate character set (ASCII codes 32-127) and loops 21 times, appending one pseudo-random character per iteration, with RANDOM seeded to 0] 
00:12:05.651 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'y:CvVb(O&)U->)EKppM+)' 
00:12:05.651 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'y:CvVb(O&)U->)EKppM+)' nqn.2016-06.io.spdk:cnode16666 
00:12:05.910 [2024-10-30 12:23:38.501849] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16666: invalid serial number 'y:CvVb(O&)U->)EKppM+)' 
00:12:05.910 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 
00:12:05.910 { 
00:12:05.910 "nqn": "nqn.2016-06.io.spdk:cnode16666", 
00:12:05.910 "serial_number": "y:CvVb(O&)U->)EKppM+)", 
00:12:05.910 "method": "nvmf_create_subsystem", 
00:12:05.910 "req_id": 1 
00:12:05.910 } 
00:12:05.910 Got JSON-RPC error response 
00:12:05.910 response: 
00:12:05.910 { 
00:12:05.910 "code": -32602, 
00:12:05.910 "message": "Invalid SN y:CvVb(O&)U->)EKppM+)" 
00:12:05.910 }' 
00:12:05.910 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 
00:12:05.910 { 
00:12:05.910 "nqn": "nqn.2016-06.io.spdk:cnode16666", 
00:12:05.910 "serial_number": "y:CvVb(O&)U->)EKppM+)", 
00:12:05.910 "method": "nvmf_create_subsystem", 
00:12:05.910 "req_id": 1 
00:12:05.910 } 
00:12:05.910 Got JSON-RPC error response 
00:12:05.910 response: 
00:12:05.910 { 
00:12:05.910 "code": -32602, 
00:12:05.910 "message": "Invalid SN y:CvVb(O&)U->)EKppM+)" 
00:12:05.910 } == *\I\n\v\a\l\i\d\ \S\N* ]] 
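    [Sketch for context, not captured output: the length checks mirror the NVMe Identify Controller
    layout, where SN is a 20-byte field and MN a 40-byte field; that is why the 21-character random
    serial above is rejected and the 41-character model number generated next will be as well. The
    boundary can be probed directly (arbitrary example nqns):
        scripts/rpc.py nvmf_create_subsystem -s "$(printf 'S%.0s' {1..20})" nqn.2016-06.io.spdk:cnode103  # 20 chars: fits
        scripts/rpc.py nvmf_create_subsystem -s "$(printf 'S%.0s' {1..21})" nqn.2016-06.io.spdk:cnode104  # 21 chars: Invalid SN]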
00:12:06.427 } 00:12:06.427 Got JSON-RPC error response 00:12:06.427 response: 00:12:06.427 { 00:12:06.427 "code": -32602, 00:12:06.427 "message": "Invalid MN |l(*IS9}?>|e%mf#>$1Tu]^.Cwrinw8;9;agqZK]x" 00:12:06.427 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:06.427 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:06.685 [2024-10-30 12:23:39.152064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.685 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:06.942 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:06.942 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:06.942 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:06.942 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:06.942 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:07.199 [2024-10-30 12:23:39.693806] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:07.199 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:07.199 { 00:12:07.199 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:07.199 "listen_address": { 00:12:07.199 "trtype": "tcp", 00:12:07.199 "traddr": "", 00:12:07.199 "trsvcid": "4421" 00:12:07.199 }, 00:12:07.199 "method": "nvmf_subsystem_remove_listener", 00:12:07.199 "req_id": 1 00:12:07.199 } 00:12:07.199 Got JSON-RPC error response 00:12:07.199 response: 00:12:07.199 { 00:12:07.199 "code": -32602, 00:12:07.199 "message": "Invalid parameters" 00:12:07.199 }' 00:12:07.199 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:07.199 { 00:12:07.199 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:07.199 "listen_address": { 00:12:07.199 "trtype": "tcp", 00:12:07.199 "traddr": "", 00:12:07.199 "trsvcid": "4421" 00:12:07.199 }, 00:12:07.199 "method": "nvmf_subsystem_remove_listener", 00:12:07.199 "req_id": 1 00:12:07.199 } 00:12:07.199 Got JSON-RPC error response 00:12:07.199 response: 00:12:07.199 { 00:12:07.199 "code": -32602, 00:12:07.199 "message": "Invalid parameters" 00:12:07.199 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:07.199 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22649 -i 0 00:12:07.456 [2024-10-30 12:23:39.978742] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22649: invalid cntlid range [0-65519] 00:12:07.456 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:07.456 { 00:12:07.456 "nqn": "nqn.2016-06.io.spdk:cnode22649", 00:12:07.456 "min_cntlid": 0, 00:12:07.456 "method": "nvmf_create_subsystem", 00:12:07.456 "req_id": 1 00:12:07.456 } 00:12:07.456 Got JSON-RPC error response 00:12:07.456 response: 00:12:07.456 { 00:12:07.456 "code": -32602, 00:12:07.456 "message": "Invalid cntlid range 
[0-65519]" 00:12:07.456 }' 00:12:07.456 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:07.456 { 00:12:07.456 "nqn": "nqn.2016-06.io.spdk:cnode22649", 00:12:07.456 "min_cntlid": 0, 00:12:07.456 "method": "nvmf_create_subsystem", 00:12:07.456 "req_id": 1 00:12:07.456 } 00:12:07.456 Got JSON-RPC error response 00:12:07.456 response: 00:12:07.456 { 00:12:07.456 "code": -32602, 00:12:07.456 "message": "Invalid cntlid range [0-65519]" 00:12:07.456 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:07.456 12:23:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20380 -i 65520 00:12:07.713 [2024-10-30 12:23:40.251724] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20380: invalid cntlid range [65520-65519] 00:12:07.713 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:07.713 { 00:12:07.713 "nqn": "nqn.2016-06.io.spdk:cnode20380", 00:12:07.713 "min_cntlid": 65520, 00:12:07.713 "method": "nvmf_create_subsystem", 00:12:07.713 "req_id": 1 00:12:07.713 } 00:12:07.713 Got JSON-RPC error response 00:12:07.713 response: 00:12:07.713 { 00:12:07.713 "code": -32602, 00:12:07.713 "message": "Invalid cntlid range [65520-65519]" 00:12:07.713 }' 00:12:07.713 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:07.713 { 00:12:07.713 "nqn": "nqn.2016-06.io.spdk:cnode20380", 00:12:07.713 "min_cntlid": 65520, 00:12:07.713 "method": "nvmf_create_subsystem", 00:12:07.713 "req_id": 1 00:12:07.713 } 00:12:07.713 Got JSON-RPC error response 00:12:07.713 response: 00:12:07.713 { 00:12:07.713 "code": -32602, 00:12:07.713 "message": "Invalid cntlid range [65520-65519]" 00:12:07.713 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:07.713 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4617 -I 0 00:12:07.971 [2024-10-30 12:23:40.528583] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4617: invalid cntlid range [1-0] 00:12:07.971 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:07.971 { 00:12:07.971 "nqn": "nqn.2016-06.io.spdk:cnode4617", 00:12:07.971 "max_cntlid": 0, 00:12:07.971 "method": "nvmf_create_subsystem", 00:12:07.971 "req_id": 1 00:12:07.971 } 00:12:07.971 Got JSON-RPC error response 00:12:07.971 response: 00:12:07.971 { 00:12:07.971 "code": -32602, 00:12:07.971 "message": "Invalid cntlid range [1-0]" 00:12:07.971 }' 00:12:07.971 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:07.971 { 00:12:07.971 "nqn": "nqn.2016-06.io.spdk:cnode4617", 00:12:07.971 "max_cntlid": 0, 00:12:07.971 "method": "nvmf_create_subsystem", 00:12:07.971 "req_id": 1 00:12:07.971 } 00:12:07.971 Got JSON-RPC error response 00:12:07.971 response: 00:12:07.971 { 00:12:07.971 "code": -32602, 00:12:07.971 "message": "Invalid cntlid range [1-0]" 00:12:07.971 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:07.971 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16778 -I 65520 00:12:08.229 [2024-10-30 
12:23:40.793466] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16778: invalid cntlid range [1-65520] 00:12:08.229 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:08.229 { 00:12:08.229 "nqn": "nqn.2016-06.io.spdk:cnode16778", 00:12:08.229 "max_cntlid": 65520, 00:12:08.229 "method": "nvmf_create_subsystem", 00:12:08.229 "req_id": 1 00:12:08.229 } 00:12:08.229 Got JSON-RPC error response 00:12:08.229 response: 00:12:08.229 { 00:12:08.229 "code": -32602, 00:12:08.229 "message": "Invalid cntlid range [1-65520]" 00:12:08.229 }' 00:12:08.229 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:08.229 { 00:12:08.229 "nqn": "nqn.2016-06.io.spdk:cnode16778", 00:12:08.229 "max_cntlid": 65520, 00:12:08.229 "method": "nvmf_create_subsystem", 00:12:08.229 "req_id": 1 00:12:08.229 } 00:12:08.229 Got JSON-RPC error response 00:12:08.229 response: 00:12:08.229 { 00:12:08.229 "code": -32602, 00:12:08.229 "message": "Invalid cntlid range [1-65520]" 00:12:08.229 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.229 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17194 -i 6 -I 5 00:12:08.487 [2024-10-30 12:23:41.062374] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17194: invalid cntlid range [6-5] 00:12:08.487 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:08.487 { 00:12:08.487 "nqn": "nqn.2016-06.io.spdk:cnode17194", 00:12:08.487 "min_cntlid": 6, 00:12:08.487 "max_cntlid": 5, 00:12:08.487 "method": "nvmf_create_subsystem", 00:12:08.487 "req_id": 1 00:12:08.487 } 00:12:08.487 Got JSON-RPC error response 00:12:08.487 response: 00:12:08.487 { 00:12:08.487 "code": -32602, 00:12:08.487 "message": "Invalid cntlid range [6-5]" 00:12:08.487 }' 00:12:08.487 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:08.487 { 00:12:08.487 "nqn": "nqn.2016-06.io.spdk:cnode17194", 00:12:08.487 "min_cntlid": 6, 00:12:08.487 "max_cntlid": 5, 00:12:08.487 "method": "nvmf_create_subsystem", 00:12:08.487 "req_id": 1 00:12:08.487 } 00:12:08.487 Got JSON-RPC error response 00:12:08.487 response: 00:12:08.487 { 00:12:08.487 "code": -32602, 00:12:08.487 "message": "Invalid cntlid range [6-5]" 00:12:08.487 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.487 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:08.746 { 00:12:08.746 "name": "foobar", 00:12:08.746 "method": "nvmf_delete_target", 00:12:08.746 "req_id": 1 00:12:08.746 } 00:12:08.746 Got JSON-RPC error response 00:12:08.746 response: 00:12:08.746 { 00:12:08.746 "code": -32602, 00:12:08.746 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:08.746 }' 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:08.746 { 00:12:08.746 "name": "foobar", 00:12:08.746 "method": "nvmf_delete_target", 00:12:08.746 "req_id": 1 00:12:08.746 } 00:12:08.746 Got JSON-RPC error response 00:12:08.746 response: 00:12:08.746 { 00:12:08.746 "code": -32602, 00:12:08.746 "message": "The specified target doesn't exist, cannot delete it." 00:12:08.746 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:08.746 rmmod nvme_tcp 00:12:08.746 rmmod nvme_fabrics 00:12:08.746 rmmod nvme_keyring 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 571084 ']' 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 571084 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 571084 ']' 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 571084 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 571084 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 571084' 00:12:08.746 killing process with pid 571084 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 571084 00:12:08.746 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 571084 00:12:09.004 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.004 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.004 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.005 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.914 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.914 00:12:10.914 real 0m9.103s 00:12:10.914 user 0m21.252s 00:12:10.914 sys 0m2.609s 00:12:10.914 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:10.914 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:10.914 ************************************ 00:12:10.914 END TEST nvmf_invalid 00:12:10.914 ************************************ 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.175 ************************************ 00:12:11.175 START TEST nvmf_connect_stress 00:12:11.175 ************************************ 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:11.175 * Looking for test storage... 
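[editor annotation] Before the connect_stress preamble scrolls on, a note on the nvmf_invalid suite that just ended above: it is purely negative testing. The script builds a random 41-character model number (one byte over the 40-byte NVMe MN field), then feeds it and a series of out-of-range cntlid pairs ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) to nvmf_create_subsystem, asserting only on the JSON-RPC error text. A minimal standalone sketch of one such check — assuming a running nvmf target and an rpc.py path adjusted to your tree (the trace uses the full workspace path):

    # min_cntlid of 0 must be rejected; SPDK's accepted controller ID range is 1-65519
    out=$(./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22649 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "rejected as expected"

The flags match the usage visible in the trace: -d sets the model number, -i/-I the min/max cntlid, and the harness compares the captured error body against the expected substring exactly as at target/invalid.sh@73-74 above.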
00:12:11.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:11.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.175 --rc genhtml_branch_coverage=1 00:12:11.175 --rc genhtml_function_coverage=1 00:12:11.175 --rc genhtml_legend=1 00:12:11.175 --rc geninfo_all_blocks=1 00:12:11.175 --rc geninfo_unexecuted_blocks=1 00:12:11.175 00:12:11.175 ' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:11.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.175 --rc genhtml_branch_coverage=1 00:12:11.175 --rc genhtml_function_coverage=1 00:12:11.175 --rc genhtml_legend=1 00:12:11.175 --rc geninfo_all_blocks=1 00:12:11.175 --rc geninfo_unexecuted_blocks=1 00:12:11.175 00:12:11.175 ' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:11.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.175 --rc genhtml_branch_coverage=1 00:12:11.175 --rc genhtml_function_coverage=1 00:12:11.175 --rc genhtml_legend=1 00:12:11.175 --rc geninfo_all_blocks=1 00:12:11.175 --rc geninfo_unexecuted_blocks=1 00:12:11.175 00:12:11.175 ' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:11.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.175 --rc genhtml_branch_coverage=1 00:12:11.175 --rc genhtml_function_coverage=1 00:12:11.175 --rc genhtml_legend=1 00:12:11.175 --rc geninfo_all_blocks=1 00:12:11.175 --rc geninfo_unexecuted_blocks=1 00:12:11.175 00:12:11.175 ' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.175 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:11.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.176 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:13.711 12:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:13.711 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:13.711 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:13.711 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:13.711 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:13.711 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.712 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:13.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:12:13.712 00:12:13.712 --- 10.0.0.2 ping statistics --- 00:12:13.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.712 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:12:13.712 00:12:13.712 --- 10.0.0.1 ping statistics --- 00:12:13.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.712 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=573728 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 573728 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 573728 ']' 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:13.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:13.712 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.712 [2024-10-30 12:23:46.172111] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:12:13.712 [2024-10-30 12:23:46.172189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.712 [2024-10-30 12:23:46.245816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:13.712 [2024-10-30 12:23:46.302205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.712 [2024-10-30 12:23:46.302282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.712 [2024-10-30 12:23:46.302311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.712 [2024-10-30 12:23:46.302323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.712 [2024-10-30 12:23:46.302332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.712 [2024-10-30 12:23:46.303861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.712 [2024-10-30 12:23:46.303977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.712 [2024-10-30 12:23:46.303980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 [2024-10-30 12:23:46.447927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
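[editor annotation] For orientation, the TCP test bed assembled in the trace above is one two-port E810 NIC looped back through a network namespace: port cvl_0_0 is moved into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Condensed from the scattered nvmf/common.sh trace lines, the setup reduces to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk (nvmfpid 573728 above), so traffic between 10.0.0.1 and 10.0.0.2 has to traverse the two ports rather than short-circuit through the host loopback — which the sub-millisecond but nonzero ping RTTs in the trace reflect.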
00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 [2024-10-30 12:23:46.465181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 NULL1 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=573866 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.971 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:13.972 12:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.972 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.229 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.229 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:14.229 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.229 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.229 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.795 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.795 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:14.795 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.795 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.795 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.053 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.053 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:15.053 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.053 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.053 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.311 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.311 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:15.311 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.311 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.311 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.578 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.578 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:15.578 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.578 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.578 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.836 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.836 12:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:15.836 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.836 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.836 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.094 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.094 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:16.094 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.094 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.094 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.659 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.659 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:16.659 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.659 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.659 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.916 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.916 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:16.916 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.916 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.916 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.173 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.173 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:17.173 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.173 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.173 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.430 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.430 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:17.430 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.430 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.430 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.995 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.995 12:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:17.995 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.995 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.995 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.252 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.252 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:18.252 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.252 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.252 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.509 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.509 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:18.509 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.509 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.509 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.767 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.767 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:18.767 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.767 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.767 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.024 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.024 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:19.024 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.024 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.024 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.592 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.592 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:19.592 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.592 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.592 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.850 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.850 12:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:19.850 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.850 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.850 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.108 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.108 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:20.108 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.108 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.108 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.365 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.365 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:20.365 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.365 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.365 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.622 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.622 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:20.622 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.622 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.622 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.187 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.187 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:21.187 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.187 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.187 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.445 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.445 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:21.445 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.445 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.445 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.703 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.703 12:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:21.703 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.703 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.703 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.961 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.961 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:21.961 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.961 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.961 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.219 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.219 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:22.219 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.219 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.219 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.784 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:22.784 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.784 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.784 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.041 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.041 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:23.041 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.042 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.042 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.299 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:23.299 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.299 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.299 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.557 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.557 12:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:23.557 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.557 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.557 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.815 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.815 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:23.815 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.815 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.815 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.072 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573866 00:12:24.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (573866) - No such process 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 573866 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.330 rmmod nvme_tcp 00:12:24.330 rmmod nvme_fabrics 00:12:24.330 rmmod nvme_keyring 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 573728 ']' 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 573728 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 573728 ']' 00:12:24.330 12:23:56 
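The long run of "kill -0 573866" / rpc_cmd pairs above is the stress loop at connect_stress.sh lines 34-35: while the connect_stress process is alive, the pre-built RPC batch is replayed against the target; once kill -0 reports "No such process" the script waits on the PID, deletes rpc.txt, drops its traps, and tears the target down. Condensed into one hedged sketch (rpc_cmd and nvmftestfini are the harness helpers named in the trace; the exact redirections are assumptions):

    # Keep the target busy for as long as the stress process runs.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd <"$rpcs"          # replay the batch built earlier
    done
    wait "$PERF_PID"              # reap the exited stress process
    rm -f "$rpcs"
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                  # stop nvmf_tgt, modprobe -r nvme-tcp/-fabrics, restore iptables
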
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 573728 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 573728 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 573728' 00:12:24.330 killing process with pid 573728 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 573728 00:12:24.330 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 573728 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.590 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.129 00:12:27.129 real 0m15.566s 00:12:27.129 user 0m38.620s 00:12:27.129 sys 0m5.960s 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.129 ************************************ 00:12:27.129 END TEST nvmf_connect_stress 00:12:27.129 ************************************ 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:27.129 12:23:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.129 ************************************ 00:12:27.129 START TEST nvmf_fused_ordering 00:12:27.129 ************************************ 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:27.129 * Looking for test storage... 00:12:27.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:27.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.129 --rc genhtml_branch_coverage=1 00:12:27.129 --rc genhtml_function_coverage=1 00:12:27.129 --rc genhtml_legend=1 00:12:27.129 --rc geninfo_all_blocks=1 00:12:27.129 --rc geninfo_unexecuted_blocks=1 00:12:27.129 00:12:27.129 ' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:27.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.129 --rc genhtml_branch_coverage=1 00:12:27.129 --rc genhtml_function_coverage=1 00:12:27.129 --rc genhtml_legend=1 00:12:27.129 --rc geninfo_all_blocks=1 00:12:27.129 --rc geninfo_unexecuted_blocks=1 00:12:27.129 00:12:27.129 ' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:27.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.129 --rc genhtml_branch_coverage=1 00:12:27.129 --rc genhtml_function_coverage=1 00:12:27.129 --rc genhtml_legend=1 00:12:27.129 --rc geninfo_all_blocks=1 00:12:27.129 --rc geninfo_unexecuted_blocks=1 00:12:27.129 00:12:27.129 ' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:27.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.129 --rc genhtml_branch_coverage=1 00:12:27.129 --rc genhtml_function_coverage=1 00:12:27.129 --rc genhtml_legend=1 00:12:27.129 --rc geninfo_all_blocks=1 00:12:27.129 --rc geninfo_unexecuted_blocks=1 00:12:27.129 00:12:27.129 ' 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
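The scripts/common.sh trace above is the harness deciding whether the installed lcov predates version 2: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares the pieces numerically from left to right, treating missing pieces as 0. A standalone sketch of the same comparison (the function name is mine, not the script's):

    version_lt() {                 # succeeds iff $1 sorts before $2
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2"   # matches the trace's outcome
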
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.129 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:27.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.130 12:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.031 12:24:01 
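Note the non-fatal error in the trace just above: nvmf/common.sh line 33 ends up evaluating '[' '' -eq 1 ']', and test's -eq demands an integer on both sides, so an unset or empty variable yields "[: : integer expression expected". The harness shrugs this off only because the false branch is the path it wanted anyway. A defensive pattern that keeps the same logic without the noise (SOME_FLAG is a hypothetical stand-in for whichever variable was empty here):

    # Noisy when the variable is empty or unset:
    #   [ "$SOME_FLAG" -eq 1 ]    ->  [: : integer expression expected
    # Quiet and equivalent: default the numeric operand to 0 first.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag is set"
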
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:29.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:29.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:29.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:29.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.031 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:12:29.032 00:12:29.032 --- 10.0.0.2 ping statistics --- 00:12:29.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.032 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:12:29.032 00:12:29.032 --- 10.0.0.1 ping statistics --- 00:12:29.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.032 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.032 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.290 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:29.290 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.290 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.290 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.290 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=577121 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 577121 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 577121 ']' 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:29.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:29.291 12:24:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.291 [2024-10-30 12:24:01.770003] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:12:29.291 [2024-10-30 12:24:01.770087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.291 [2024-10-30 12:24:01.846646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.291 [2024-10-30 12:24:01.903747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.291 [2024-10-30 12:24:01.903812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.291 [2024-10-30 12:24:01.903836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.291 [2024-10-30 12:24:01.903847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.291 [2024-10-30 12:24:01.903856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.291 [2024-10-30 12:24:01.904426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 [2024-10-30 12:24:02.049087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 [2024-10-30 12:24:02.065335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 NULL1 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.549 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:29.549 [2024-10-30 12:24:02.112681] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
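
With connectivity confirmed, the rpc_cmd records above assemble the entire target side: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev with 512-byte blocks attached as namespace 1 (reported as "size: 1GB" below). The same sequence can be replayed by hand with scripts/rpc.py against a running target; a sketch, assuming the default RPC socket that rpc_cmd also talks to:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB backing device, 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering helper launched next connects to that listener and drives fused command pairs at the target (in NVMe the one architected fused pair is Compare followed by Write, and the controller must keep the two halves adjacent and in order); it logs a fused_ordering(N) record for each of its 1024 operations, so the strictly increasing run of indices that follows is the pass signal.
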
00:12:29.549 [2024-10-30 12:24:02.112721] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577160 ] 00:12:30.114 Attached to nqn.2016-06.io.spdk:cnode1 00:12:30.114 Namespace ID: 1 size: 1GB 00:12:30.114 fused_ordering(0) 00:12:30.114 fused_ordering(1) 00:12:30.114 fused_ordering(2) 00:12:30.114 fused_ordering(3) 00:12:30.114 fused_ordering(4) 00:12:30.114 fused_ordering(5) 00:12:30.114 fused_ordering(6) 00:12:30.114 fused_ordering(7) 00:12:30.114 fused_ordering(8) 00:12:30.114 fused_ordering(9) 00:12:30.114 fused_ordering(10) 00:12:30.114 fused_ordering(11) 00:12:30.114 fused_ordering(12) 00:12:30.114 fused_ordering(13) 00:12:30.114 fused_ordering(14) 00:12:30.114 fused_ordering(15) 00:12:30.114 fused_ordering(16) 00:12:30.114 fused_ordering(17) 00:12:30.114 fused_ordering(18) 00:12:30.114 fused_ordering(19) 00:12:30.114 fused_ordering(20) 00:12:30.114 fused_ordering(21) 00:12:30.114 fused_ordering(22) 00:12:30.114 fused_ordering(23) 00:12:30.114 fused_ordering(24) 00:12:30.114 fused_ordering(25) 00:12:30.114 fused_ordering(26) 00:12:30.114 fused_ordering(27) 00:12:30.114 fused_ordering(28) 00:12:30.114 fused_ordering(29) 00:12:30.114 fused_ordering(30) 00:12:30.114 fused_ordering(31) 00:12:30.114 fused_ordering(32) 00:12:30.114 fused_ordering(33) 00:12:30.114 fused_ordering(34) 00:12:30.114 fused_ordering(35) 00:12:30.114 fused_ordering(36) 00:12:30.114 fused_ordering(37) 00:12:30.114 fused_ordering(38) 00:12:30.114 fused_ordering(39) 00:12:30.114 fused_ordering(40) 00:12:30.114 fused_ordering(41) 00:12:30.114 fused_ordering(42) 00:12:30.115 fused_ordering(43) 00:12:30.115 fused_ordering(44) 00:12:30.115 fused_ordering(45) 00:12:30.115 fused_ordering(46) 00:12:30.115 fused_ordering(47) 00:12:30.115 fused_ordering(48) 00:12:30.115 fused_ordering(49) 00:12:30.115 fused_ordering(50) 00:12:30.115 fused_ordering(51) 00:12:30.115 fused_ordering(52) 00:12:30.115 fused_ordering(53) 00:12:30.115 fused_ordering(54) 00:12:30.115 fused_ordering(55) 00:12:30.115 fused_ordering(56) 00:12:30.115 fused_ordering(57) 00:12:30.115 fused_ordering(58) 00:12:30.115 fused_ordering(59) 00:12:30.115 fused_ordering(60) 00:12:30.115 fused_ordering(61) 00:12:30.115 fused_ordering(62) 00:12:30.115 fused_ordering(63) 00:12:30.115 fused_ordering(64) 00:12:30.115 fused_ordering(65) 00:12:30.115 fused_ordering(66) 00:12:30.115 fused_ordering(67) 00:12:30.115 fused_ordering(68) 00:12:30.115 fused_ordering(69) 00:12:30.115 fused_ordering(70) 00:12:30.115 fused_ordering(71) 00:12:30.115 fused_ordering(72) 00:12:30.115 fused_ordering(73) 00:12:30.115 fused_ordering(74) 00:12:30.115 fused_ordering(75) 00:12:30.115 fused_ordering(76) 00:12:30.115 fused_ordering(77) 00:12:30.115 fused_ordering(78) 00:12:30.115 fused_ordering(79) 00:12:30.115 fused_ordering(80) 00:12:30.115 fused_ordering(81) 00:12:30.115 fused_ordering(82) 00:12:30.115 fused_ordering(83) 00:12:30.115 fused_ordering(84) 00:12:30.115 fused_ordering(85) 00:12:30.115 fused_ordering(86) 00:12:30.115 fused_ordering(87) 00:12:30.115 fused_ordering(88) 00:12:30.115 fused_ordering(89) 00:12:30.115 fused_ordering(90) 00:12:30.115 fused_ordering(91) 00:12:30.115 fused_ordering(92) 00:12:30.115 fused_ordering(93) 00:12:30.115 fused_ordering(94) 00:12:30.115 fused_ordering(95) 00:12:30.115 fused_ordering(96) 00:12:30.115 fused_ordering(97) 00:12:30.115 fused_ordering(98) 
00:12:30.115 fused_ordering(99) [fused_ordering(100) through fused_ordering(957) continue in strictly consecutive order, one record per operation, timestamps advancing from 00:12:30.115 to 00:12:31.766] 00:12:31.766 fused_ordering(958)
00:12:31.766 fused_ordering(959) 00:12:31.766 fused_ordering(960) 00:12:31.766 fused_ordering(961) 00:12:31.766 fused_ordering(962) 00:12:31.766 fused_ordering(963) 00:12:31.766 fused_ordering(964) 00:12:31.766 fused_ordering(965) 00:12:31.766 fused_ordering(966) 00:12:31.766 fused_ordering(967) 00:12:31.766 fused_ordering(968) 00:12:31.766 fused_ordering(969) 00:12:31.766 fused_ordering(970) 00:12:31.766 fused_ordering(971) 00:12:31.766 fused_ordering(972) 00:12:31.766 fused_ordering(973) 00:12:31.766 fused_ordering(974) 00:12:31.766 fused_ordering(975) 00:12:31.766 fused_ordering(976) 00:12:31.766 fused_ordering(977) 00:12:31.766 fused_ordering(978) 00:12:31.766 fused_ordering(979) 00:12:31.766 fused_ordering(980) 00:12:31.766 fused_ordering(981) 00:12:31.766 fused_ordering(982) 00:12:31.766 fused_ordering(983) 00:12:31.766 fused_ordering(984) 00:12:31.766 fused_ordering(985) 00:12:31.766 fused_ordering(986) 00:12:31.766 fused_ordering(987) 00:12:31.766 fused_ordering(988) 00:12:31.766 fused_ordering(989) 00:12:31.766 fused_ordering(990) 00:12:31.766 fused_ordering(991) 00:12:31.766 fused_ordering(992) 00:12:31.766 fused_ordering(993) 00:12:31.766 fused_ordering(994) 00:12:31.766 fused_ordering(995) 00:12:31.766 fused_ordering(996) 00:12:31.766 fused_ordering(997) 00:12:31.766 fused_ordering(998) 00:12:31.766 fused_ordering(999) 00:12:31.766 fused_ordering(1000) 00:12:31.766 fused_ordering(1001) 00:12:31.766 fused_ordering(1002) 00:12:31.766 fused_ordering(1003) 00:12:31.766 fused_ordering(1004) 00:12:31.766 fused_ordering(1005) 00:12:31.766 fused_ordering(1006) 00:12:31.766 fused_ordering(1007) 00:12:31.766 fused_ordering(1008) 00:12:31.766 fused_ordering(1009) 00:12:31.766 fused_ordering(1010) 00:12:31.766 fused_ordering(1011) 00:12:31.766 fused_ordering(1012) 00:12:31.766 fused_ordering(1013) 00:12:31.766 fused_ordering(1014) 00:12:31.766 fused_ordering(1015) 00:12:31.766 fused_ordering(1016) 00:12:31.766 fused_ordering(1017) 00:12:31.766 fused_ordering(1018) 00:12:31.766 fused_ordering(1019) 00:12:31.766 fused_ordering(1020) 00:12:31.766 fused_ordering(1021) 00:12:31.766 fused_ordering(1022) 00:12:31.766 fused_ordering(1023) 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.025 rmmod nvme_tcp 00:12:32.025 rmmod nvme_fabrics 00:12:32.025 rmmod nvme_keyring 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:32.025 12:24:04 
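
The run complete, the EXIT trap unwinds the fixture: nvmftestfini's nvmfcleanup syncs and unloads the initiator-side modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -v -r output), and the records that follow kill the target process, strip the SPDK-added iptables rules, and remove the test namespace. Distilled into a sketch, with the pid and namespace names taken from this run and remove_spdk_ns's real implementation being more thorough:

  kill 577121                                            # stop nvmf_tgt (reactor_1)
  modprobe -r nvme-tcp nvme-fabrics                      # unload initiator modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except SPDK's
  ip netns delete cvl_0_0_ns_spdk                        # drop the target's namespace (assumed equivalent)
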
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 577121 ']' 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 577121 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 577121 ']' 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 577121 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 577121 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 577121' 00:12:32.025 killing process with pid 577121 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 577121 00:12:32.025 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 577121 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.285 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:34.190 00:12:34.190 real 0m7.568s 00:12:34.190 user 0m4.668s 00:12:34.190 sys 0m3.295s 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:34.190 ************************************ 00:12:34.190 END TEST nvmf_fused_ordering 00:12:34.190 
************************************ 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.190 ************************************ 00:12:34.190 START TEST nvmf_ns_masking 00:12:34.190 ************************************ 00:12:34.190 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:34.451 * Looking for test storage... 00:12:34.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.451 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:34.451 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:34.451 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:34.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.451 --rc genhtml_branch_coverage=1 00:12:34.451 --rc genhtml_function_coverage=1 00:12:34.451 --rc genhtml_legend=1 00:12:34.451 --rc geninfo_all_blocks=1 00:12:34.451 --rc geninfo_unexecuted_blocks=1 00:12:34.451 00:12:34.451 ' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:34.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.451 --rc genhtml_branch_coverage=1 00:12:34.451 --rc genhtml_function_coverage=1 00:12:34.451 --rc genhtml_legend=1 00:12:34.451 --rc geninfo_all_blocks=1 00:12:34.451 --rc geninfo_unexecuted_blocks=1 00:12:34.451 00:12:34.451 ' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:34.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.451 --rc genhtml_branch_coverage=1 00:12:34.451 --rc genhtml_function_coverage=1 00:12:34.451 --rc genhtml_legend=1 00:12:34.451 --rc geninfo_all_blocks=1 00:12:34.451 --rc geninfo_unexecuted_blocks=1 00:12:34.451 00:12:34.451 ' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:34.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.451 --rc genhtml_branch_coverage=1 00:12:34.451 --rc genhtml_function_coverage=1 00:12:34.451 --rc genhtml_legend=1 00:12:34.451 --rc geninfo_all_blocks=1 00:12:34.451 --rc geninfo_unexecuted_blocks=1 00:12:34.451 00:12:34.451 ' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
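
The scripts/common.sh burst above is ns_masking.sh probing the installed lcov: lt 1.15 2 calls cmp_versions, which splits both version strings on dots, dashes, and colons and compares them field by numeric field, so 1.15 sorts below 2 and the LCOV_OPTS/LCOV exports that follow are applied. A compact equivalent of that comparison, not the script's exact code and limited to numeric fields:

  version_lt() {    # usage: version_lt 1.15 2; returns 0 when $1 < $2
      local IFS='.-:' v
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          ((${a[v]:-0} < ${b[v]:-0})) && return 0    # missing fields compare as 0
          ((${a[v]:-0} > ${b[v]:-0})) && return 1
      done
      return 1      # equal is not less-than
  }
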
nvmf/common.sh@7 -- # uname -s 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.451 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
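
Sourcing test/nvmf/common.sh also mints this run's host identity: nvme gen-hostnqn returned nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55, and NVME_HOSTID is the uuid portion of that NQN. The stray "[: : integer expression expected" complaint above is harmless: line 33 of common.sh tests an optional flag that is unset in this configuration, so the [ builtin sees an empty string where -eq needs an integer, prints the warning, and the condition evaluates false. The same behaviour in isolation:

  flag=
  [ "$flag" -eq 1 ] && echo enabled    # bash: [: : integer expression expected (status 2, branch not taken)
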
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5f06f1ea-4f93-4245-aee5-1f8df440daf5 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=dbdc106f-2cf3-4c57-9905-4a61a21faa00 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7a860e5b-812a-471b-872c-1a93fedec797 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.452 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.984 12:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.984 12:24:09 
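`gather_supported_nvmf_pci_devs` above buckets NICs by PCI vendor:device ID (Intel 0x1592/0x159b into `e810`, 0x37d2 into `x722`, the Mellanox IDs into `mlx`); with `SPDK_TEST_NVMF_NICS=e810` only the `e810` bucket ends up in `pci_devs`. A standalone sketch of the same classification, assuming `lspci` from pciutils rather than the script's internal pci_bus_cache:

  # List E810 functions (vendor 0x8086, device 0x159b) the way the helper finds them.
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $bdf (0x8086 - 0x159b)"
  done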
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.984 12:24:09 
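The sequence above turns one dual-port NIC into a self-contained point-to-point rig: port cvl_0_0 (target side, 10.0.0.2) moves into the cvl_0_0_ns_spdk network namespace, while cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The loopback bring-up, the iptables ACCEPT for TCP port 4420, and the two pings that follow just verify the path in both directions before any NVMe traffic is attempted.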
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:36.984 00:12:36.984 --- 10.0.0.2 ping statistics --- 00:12:36.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.984 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:36.984 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:12:36.985 00:12:36.985 --- 10.0.0.1 ping statistics --- 00:12:36.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.985 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=579980 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 579980 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 579980 ']' 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.985 [2024-10-30 12:24:09.349006] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:12:36.985 [2024-10-30 12:24:09.349087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.985 [2024-10-30 12:24:09.421422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.985 [2024-10-30 12:24:09.480612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.985 [2024-10-30 12:24:09.480663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.985 [2024-10-30 12:24:09.480676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.985 [2024-10-30 12:24:09.480687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.985 [2024-10-30 12:24:09.480698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
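`nvmfappstart` above launches the target inside the netns and then blocks in `waitforlisten` until the RPC socket is usable. A minimal sketch of that startup, with the binary path shortened and the wait reduced to its core (the real helper also retries an RPC probe):

  # Start nvmf_tgt in the target namespace; -e 0xFFFF enables all tracepoint groups.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll for the UNIX-domain RPC socket before issuing any rpc.py calls.
  while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done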
00:12:36.985 [2024-10-30 12:24:09.481253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.985 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:37.242 [2024-10-30 12:24:09.864788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.242 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:37.242 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:37.242 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:37.500 Malloc1 00:12:37.500 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:38.065 Malloc2 00:12:38.066 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.324 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:38.586 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.899 [2024-10-30 12:24:11.299333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.899 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:38.899 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7a860e5b-812a-471b-872c-1a93fedec797 -a 10.0.0.2 -s 4420 -i 4 00:12:38.899 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.899 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:38.900 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.900 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:38.900 
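Everything the test needs on the wire is provisioned above with a handful of RPCs, after which the kernel initiator attaches. The same sequence as a standalone sketch (rpc.py path shortened to `scripts/rpc.py`):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 7a860e5b-812a-471b-872c-1a93fedec797 -a 10.0.0.2 -s 4420 -i 4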
12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:40.868 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:41.126 [ 0]:0x1 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01cd6a45d3bf48b7ae9e1f6145a034a3 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01cd6a45d3bf48b7ae9e1f6145a034a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.126 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:41.384 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:41.384 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:41.384 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:41.384 [ 0]:0x1 00:12:41.384 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.384 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:41.384 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01cd6a45d3bf48b7ae9e1f6145a034a3 00:12:41.384 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01cd6a45d3bf48b7ae9e1f6145a034a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.384 12:24:14 
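`ns_is_visible` above is the test's visibility probe: grep `nvme list-ns` for the nsid, then read the NGUID via `nvme id-ns`, treating an all-zero NGUID as "hidden". A reconstruction from the calls in the trace (the real helper lives in ns_masking.sh; note the grep result is printed but not acted on, only the NGUID comparison decides):

  ns_is_visible() {
      # $1 is an nsid as printed by nvme list-ns, e.g. 0x1
      nvme list-ns /dev/nvme0 | grep "$1"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      # A namespace masked away from this host identifies with an all-zero NGUID.
      [[ $nguid != "00000000000000000000000000000000" ]]
  }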
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:41.384 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:41.384 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:41.384 [ 1]:0x2 00:12:41.384 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:41.384 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:41.643 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199976bd2cb1441094adc77388631514 00:12:41.643 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.643 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:41.643 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.643 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.900 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:42.158 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:42.158 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7a860e5b-812a-471b-872c-1a93fedec797 -a 10.0.0.2 -s 4420 -i 4 00:12:42.417 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:42.417 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:42.417 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.417 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:12:42.417 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:12:42.417 12:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:44.316 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.317 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:44.317 [ 0]:0x2 00:12:44.317 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.317 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.575 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=199976bd2cb1441094adc77388631514 00:12:44.575 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.575 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.832 [ 0]:0x1 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01cd6a45d3bf48b7ae9e1f6145a034a3 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01cd6a45d3bf48b7ae9e1f6145a034a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:44.832 [ 1]:0x2 00:12:44.832 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.833 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.833 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199976bd2cb1441094adc77388631514 00:12:44.833 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.833 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.091 12:24:17 
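This is the core of the masking demo: once nsid 1 is re-added with `--no-auto-visible` it stays hidden from every host until an explicit grant, and the grant is revocable at runtime. The control-plane half, condensed (reusing the `$rpc` shorthand from the earlier sketch):

  # nsid 1 was added with --no-auto-visible, so it starts hidden from all hosts.
  $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # grant: host1 sees nsid 1
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # revoke: hidden again

Each transition is observable on the live connection: nsid 1 appears in or drops out of `nvme list-ns` without reconnecting, which is exactly what the repeated ns_is_visible probes before and after each RPC are checking.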
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:45.091 [ 0]:0x2 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199976bd2cb1441094adc77388631514 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:45.091 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.377 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:45.377 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:45.377 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7a860e5b-812a-471b-872c-1a93fedec797 -a 10.0.0.2 -s 4420 -i 4 00:12:45.635 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:45.635 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:45.635 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.635 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:45.635 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:45.635 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:47.531 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:47.787 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:47.787 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:47.787 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:47.787 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.787 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.787 [ 0]:0x1 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=01cd6a45d3bf48b7ae9e1f6145a034a3 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 01cd6a45d3bf48b7ae9e1f6145a034a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.788 [ 1]:0x2 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199976bd2cb1441094adc77388631514 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.788 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.044 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.301 [ 0]:0x2 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199976bd2cb1441094adc77388631514 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.301 12:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:48.301 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.558 [2024-10-30 12:24:21.117909] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:48.558 request: 00:12:48.558 { 00:12:48.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.558 "nsid": 2, 00:12:48.558 "host": "nqn.2016-06.io.spdk:host1", 00:12:48.558 "method": "nvmf_ns_remove_host", 00:12:48.558 "req_id": 1 00:12:48.558 } 00:12:48.558 Got JSON-RPC error response 00:12:48.558 response: 00:12:48.558 { 00:12:48.558 "code": -32602, 00:12:48.558 "message": "Invalid parameters" 00:12:48.558 } 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.558 12:24:21 
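The JSON-RPC failure above is the expected negative case: nsid 2 was added without `--no-auto-visible`, so it carries no per-host mask and `nvmf_ns_remove_host` is rejected with code -32602 (Invalid parameters). The `NOT` wrapper turns that rejection into a pass; its shape, roughly (the real helper in autotest_common.sh also special-cases exit codes above 128):

  # NOT: succeed only if the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1    # unexpected success
      fi
      return 0
  }
  NOT $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1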
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.558 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.816 [ 0]:0x2 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=199976bd2cb1441094adc77388631514 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 199976bd2cb1441094adc77388631514 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=581508 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 581508 /var/tmp/host.sock 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 581508 ']' 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:48.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:48.816 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:48.816 [2024-10-30 12:24:21.474888] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:12:48.816 [2024-10-30 12:24:21.474978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581508 ] 00:12:49.075 [2024-10-30 12:24:21.544746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.075 [2024-10-30 12:24:21.604577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.333 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:49.333 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:49.333 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.591 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.849 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5f06f1ea-4f93-4245-aee5-1f8df440daf5 00:12:49.849 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:49.849 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5F06F1EA4F934245AEE51F8DF440DAF5 -i 00:12:50.416 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid dbdc106f-2cf3-4c57-9905-4a61a21faa00 00:12:50.416 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:50.416 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DBDC106F2CF34C5799054A61A21FAA00 -i 00:12:50.416 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
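From here on a second SPDK app (`spdk_tgt -r /var/tmp/host.sock -m 2`, pid 581508 above) plays the initiator, so masking can be checked through SPDK's own bdev_nvme stack instead of the kernel driver. Two helpers from the trace, reconstructed:

  # hostrpc: aim rpc.py at the host-side app's socket instead of the target's.
  hostrpc() {
      scripts/rpc.py -s /var/tmp/host.sock "$@"
  }
  # uuid2nguid: an NGUID is the UUID upper-cased with the dashes stripped,
  # matching the -g values passed to nvmf_subsystem_add_ns here.
  uuid2nguid() {
      echo "${1^^}" | tr -d '-'
  }
  uuid2nguid 5f06f1ea-4f93-4245-aee5-1f8df440daf5   # -> 5F06F1EA4F934245AEE51F8DF440DAF5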
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:50.673 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:50.931 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:50.931 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:51.496 nvme0n1 00:12:51.496 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:51.497 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:52.062 nvme1n2 00:12:52.062 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:52.062 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:52.062 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:52.062 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:52.062 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:52.320 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:52.320 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:52.320 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:52.320 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:52.577 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5f06f1ea-4f93-4245-aee5-1f8df440daf5 == \5\f\0\6\f\1\e\a\-\4\f\9\3\-\4\2\4\5\-\a\e\e\5\-\1\f\8\d\f\4\4\0\d\a\f\5 ]] 00:12:52.578 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:52.578 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:52.578 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:52.834 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
dbdc106f-2cf3-4c57-9905-4a61a21faa00 == \d\b\d\c\1\0\6\f\-\2\c\f\3\-\4\c\5\7\-\9\9\0\5\-\4\a\6\1\a\2\1\f\a\a\0\0 ]] 00:12:52.835 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.092 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 5f06f1ea-4f93-4245-aee5-1f8df440daf5 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5F06F1EA4F934245AEE51F8DF440DAF5 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5F06F1EA4F934245AEE51F8DF440DAF5 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:53.349 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5F06F1EA4F934245AEE51F8DF440DAF5 00:12:53.607 [2024-10-30 12:24:26.128382] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:53.607 [2024-10-30 12:24:26.128421] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:53.607 [2024-10-30 12:24:26.128451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:53.607 request: 00:12:53.607 { 00:12:53.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.607 "namespace": { 00:12:53.607 "bdev_name": 
"invalid", 00:12:53.607 "nsid": 1, 00:12:53.607 "nguid": "5F06F1EA4F934245AEE51F8DF440DAF5", 00:12:53.607 "no_auto_visible": false 00:12:53.607 }, 00:12:53.607 "method": "nvmf_subsystem_add_ns", 00:12:53.607 "req_id": 1 00:12:53.607 } 00:12:53.607 Got JSON-RPC error response 00:12:53.607 response: 00:12:53.607 { 00:12:53.607 "code": -32602, 00:12:53.607 "message": "Invalid parameters" 00:12:53.607 } 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 5f06f1ea-4f93-4245-aee5-1f8df440daf5 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:53.607 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5F06F1EA4F934245AEE51F8DF440DAF5 -i 00:12:53.864 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:55.763 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:55.763 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:55.763 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:56.020 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:56.020 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 581508 00:12:56.021 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 581508 ']' 00:12:56.021 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 581508 00:12:56.021 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:56.021 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:56.021 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 581508 00:12:56.279 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:56.279 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:56.279 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 581508' 00:12:56.279 killing process with pid 581508 00:12:56.279 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 581508 00:12:56.279 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 581508 00:12:56.537 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.102 rmmod nvme_tcp 00:12:57.102 rmmod nvme_fabrics 00:12:57.102 rmmod nvme_keyring 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.102 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 579980 ']' 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 579980 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 579980 ']' 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 579980 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 579980 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 579980' 00:12:57.103 killing process with pid 579980 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 579980 00:12:57.103 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 579980 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.362 
12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.362 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.270 00:12:59.270 real 0m25.005s 00:12:59.270 user 0m36.377s 00:12:59.270 sys 0m4.671s 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:59.270 ************************************ 00:12:59.270 END TEST nvmf_ns_masking 00:12:59.270 ************************************ 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.270 ************************************ 00:12:59.270 START TEST nvmf_nvme_cli 00:12:59.270 ************************************ 00:12:59.270 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:59.529 * Looking for test storage... 
00:12:59.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.529 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:59.529 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:12:59.529 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.529 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.530 --rc genhtml_branch_coverage=1 00:12:59.530 --rc genhtml_function_coverage=1 00:12:59.530 --rc genhtml_legend=1 00:12:59.530 --rc geninfo_all_blocks=1 00:12:59.530 --rc geninfo_unexecuted_blocks=1 00:12:59.530 00:12:59.530 ' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.530 --rc genhtml_branch_coverage=1 00:12:59.530 --rc genhtml_function_coverage=1 00:12:59.530 --rc genhtml_legend=1 00:12:59.530 --rc geninfo_all_blocks=1 00:12:59.530 --rc geninfo_unexecuted_blocks=1 00:12:59.530 00:12:59.530 ' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.530 --rc genhtml_branch_coverage=1 00:12:59.530 --rc genhtml_function_coverage=1 00:12:59.530 --rc genhtml_legend=1 00:12:59.530 --rc geninfo_all_blocks=1 00:12:59.530 --rc geninfo_unexecuted_blocks=1 00:12:59.530 00:12:59.530 ' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:59.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.530 --rc genhtml_branch_coverage=1 00:12:59.530 --rc genhtml_function_coverage=1 00:12:59.530 --rc genhtml_legend=1 00:12:59.530 --rc geninfo_all_blocks=1 00:12:59.530 --rc geninfo_unexecuted_blocks=1 00:12:59.530 00:12:59.530 ' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.530 12:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.530 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.065 
12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.065 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:13:02.066 00:13:02.066 --- 10.0.0.2 ping statistics --- 00:13:02.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.066 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:02.066 00:13:02.066 --- 10.0.0.1 ping statistics --- 00:13:02.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.066 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=584529 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 584529 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 584529 ']' 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 [2024-10-30 12:24:34.359362] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:13:02.066 [2024-10-30 12:24:34.359451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.066 [2024-10-30 12:24:34.431943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.066 [2024-10-30 12:24:34.494452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.066 [2024-10-30 12:24:34.494506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.066 [2024-10-30 12:24:34.494536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.066 [2024-10-30 12:24:34.494547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.066 [2024-10-30 12:24:34.494557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.066 [2024-10-30 12:24:34.496155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.066 [2024-10-30 12:24:34.496217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.066 [2024-10-30 12:24:34.496283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.066 [2024-10-30 12:24:34.496287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 [2024-10-30 12:24:34.653050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 Malloc0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 Malloc1 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.066 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.324 [2024-10-30 12:24:34.752897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:02.324 00:13:02.324 Discovery Log Number of Records 2, Generation counter 2 00:13:02.324 =====Discovery Log Entry 0====== 00:13:02.324 trtype: tcp 00:13:02.324 adrfam: ipv4 00:13:02.324 subtype: current discovery subsystem 00:13:02.324 treq: not required 00:13:02.324 portid: 0 00:13:02.324 trsvcid: 4420 00:13:02.324 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:02.324 traddr: 10.0.0.2 00:13:02.324 eflags: explicit discovery connections, duplicate discovery information 00:13:02.324 sectype: none 00:13:02.324 =====Discovery Log Entry 1====== 00:13:02.324 trtype: tcp 00:13:02.324 adrfam: ipv4 00:13:02.324 subtype: nvme subsystem 00:13:02.324 treq: not required 00:13:02.324 portid: 0 00:13:02.324 trsvcid: 4420 00:13:02.324 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:02.324 traddr: 10.0.0.2 00:13:02.324 eflags: none 00:13:02.324 sectype: none 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:02.324 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:02.325 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.890 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:02.890 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:13:02.890 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.890 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:02.890 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:02.890 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:05.417 12:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:05.417 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:05.418 /dev/nvme0n2 ]] 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:05.418 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.676 12:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.676 rmmod nvme_tcp 00:13:05.676 rmmod nvme_fabrics 00:13:05.676 rmmod nvme_keyring 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 584529 ']' 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 584529 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 584529 ']' 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 584529 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 584529 
00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 584529' 00:13:05.676 killing process with pid 584529 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 584529 00:13:05.676 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 584529 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.935 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.474 00:13:08.474 real 0m8.636s 00:13:08.474 user 0m16.602s 00:13:08.474 sys 0m2.262s 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:08.474 ************************************ 00:13:08.474 END TEST nvmf_nvme_cli 00:13:08.474 ************************************ 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.474 ************************************ 00:13:08.474 START TEST nvmf_vfio_user 00:13:08.474 ************************************ 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:13:08.474 * Looking for test storage... 00:13:08.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.474 --rc genhtml_branch_coverage=1 00:13:08.474 --rc genhtml_function_coverage=1 00:13:08.474 --rc genhtml_legend=1 00:13:08.474 --rc geninfo_all_blocks=1 00:13:08.474 --rc geninfo_unexecuted_blocks=1 00:13:08.474 00:13:08.474 ' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.474 --rc genhtml_branch_coverage=1 00:13:08.474 --rc genhtml_function_coverage=1 00:13:08.474 --rc genhtml_legend=1 00:13:08.474 --rc geninfo_all_blocks=1 00:13:08.474 --rc geninfo_unexecuted_blocks=1 00:13:08.474 00:13:08.474 ' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.474 --rc genhtml_branch_coverage=1 00:13:08.474 --rc genhtml_function_coverage=1 00:13:08.474 --rc genhtml_legend=1 00:13:08.474 --rc geninfo_all_blocks=1 00:13:08.474 --rc geninfo_unexecuted_blocks=1 00:13:08.474 00:13:08.474 ' 00:13:08.474 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:08.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.475 --rc genhtml_branch_coverage=1 00:13:08.475 --rc genhtml_function_coverage=1 00:13:08.475 --rc genhtml_legend=1 00:13:08.475 --rc geninfo_all_blocks=1 00:13:08.475 --rc geninfo_unexecuted_blocks=1 00:13:08.475 00:13:08.475 ' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
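The "[: : integer expression expected" complaint a few records up (nvmf/common.sh line 33) is bash's numeric test choking on an empty string, not a test failure: build_nvmf_app_args evaluates [ '' -eq 1 ] because an optional flag is unset in this run's config. A minimal reproduction and the usual defensive pattern (the variable name here is hypothetical, not the one in common.sh):

    flag=''
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty value keeps the test quiet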
00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=585453 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 585453' 00:13:08.475 Process pid: 585453 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 585453 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 585453 ']' 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.475 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:08.475 [2024-10-30 12:24:40.829343] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:13:08.475 [2024-10-30 12:24:40.829453] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.475 [2024-10-30 12:24:40.901507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.475 [2024-10-30 12:24:40.961736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.476 [2024-10-30 12:24:40.961792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
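The startup handshake that completes just below is waitforlisten (autotest_common.sh@833ff in the trace). A hedged sketch of its shape; the probe command and the polling interval are assumptions, since the trace only shows the argument parsing and the final return 0:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                      # target died early
            "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5                                                   # interval is an assumption
        done
        return 1
    }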
00:13:08.476 [2024-10-30 12:24:40.961821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.476 [2024-10-30 12:24:40.961832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.476 [2024-10-30 12:24:40.961842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.476 [2024-10-30 12:24:40.963483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.476 [2024-10-30 12:24:40.963510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.476 [2024-10-30 12:24:40.963580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.476 [2024-10-30 12:24:40.963584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.476 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:08.476 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:08.476 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:09.408 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:09.974 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:09.974 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:09.974 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:09.974 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:09.974 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:09.974 Malloc1 00:13:10.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:10.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:10.810 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:10.810 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:10.810 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:10.810 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:11.068 Malloc2 00:13:11.068 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
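Condensed, the setup_nvmf_vfio_user sequence being traced here (the cnode2 add_ns/add_listener calls continue just below) is the loop sketched next. The RPC invocations are verbatim from the trace; only the loop packaging is inferred from the @68-@74 line tags:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NUM_DEVICES=2
    $rpc_py nvmf_create_transport -t VFIOUSER
    for i in $(seq 1 $NUM_DEVICES); do
        traddr=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$traddr"
        $rpc_py bdev_malloc_create 64 512 -b Malloc$i      # 64 MiB bdev, 512 B blocks
        $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a "$traddr" -s 0
    done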
00:13:11.634 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:11.634 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:11.891 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:11.891 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:11.892 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:11.892 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:11.892 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:11.892 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:12.151 [2024-10-30 12:24:44.582711] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:13:12.151 [2024-10-30 12:24:44.582754] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585883 ] 00:13:12.151 [2024-10-30 12:24:44.634344] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:12.151 [2024-10-30 12:24:44.644744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:12.151 [2024-10-30 12:24:44.644773] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8065890000 00:13:12.151 [2024-10-30 12:24:44.645740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.646736] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.647739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.648741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.649747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.650754] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.651754] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.652761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.151 [2024-10-30 12:24:44.653765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:12.152 [2024-10-30 12:24:44.653789] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8065885000 00:13:12.152 [2024-10-30 12:24:44.654910] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:12.152 [2024-10-30 12:24:44.668947] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:12.152 [2024-10-30 12:24:44.668987] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:12.152 [2024-10-30 12:24:44.677905] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:12.152 [2024-10-30 12:24:44.677957] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:12.152 [2024-10-30 12:24:44.678046] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:12.152 [2024-10-30 12:24:44.678076] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:12.152 [2024-10-30 12:24:44.678086] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:12.152 [2024-10-30 12:24:44.678892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:12.152 [2024-10-30 12:24:44.678911] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:12.152 [2024-10-30 12:24:44.678924] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:12.152 [2024-10-30 12:24:44.679894] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:12.152 [2024-10-30 12:24:44.679912] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:12.152 [2024-10-30 12:24:44.679925] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:12.152 [2024-10-30 12:24:44.680903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:12.152 [2024-10-30 12:24:44.680921] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:12.152 [2024-10-30 12:24:44.681909] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
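The register traffic in the records above and just below is the standard NVMe controller bring-up handshake driven by nvme_ctrlr.c's state machine. Summarized (register names per the NVMe 1.3 spec; offsets and values are the ones dumped in this trace):

    # 0x00 CAP  read  0x201e0100ff    # controller capabilities
    # 0x08 VS   read  0x10300         # version 1.3
    # 0x14 CC   read  0x0             # EN=0: controller disabled
    # 0x1c CSTS read  0x0             # RDY=0 confirmed while disabled
    # 0x28 ASQ  write 0x2000003c0000  # admin submission queue base
    # 0x30 ACQ  write 0x2000003be000  # admin completion queue base
    # 0x24 AQA  write 0xff00ff        # admin queue sizes (256 entries, 0-based)
    # 0x14 CC   write 0x460001        # EN=1: enable controller
    # 0x1c CSTS read  0x1             # RDY=1: ready; identify sequence begins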
00:13:12.152 [2024-10-30 12:24:44.681929] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:12.152 [2024-10-30 12:24:44.681938] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:12.152 [2024-10-30 12:24:44.681950] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:12.152 [2024-10-30 12:24:44.682059] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:12.152 [2024-10-30 12:24:44.682067] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:12.152 [2024-10-30 12:24:44.682076] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:12.152 [2024-10-30 12:24:44.682921] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:12.152 [2024-10-30 12:24:44.683919] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:12.152 [2024-10-30 12:24:44.684925] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:12.152 [2024-10-30 12:24:44.685919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:12.152 [2024-10-30 12:24:44.686033] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:12.152 [2024-10-30 12:24:44.686935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:12.152 [2024-10-30 12:24:44.686953] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:12.152 [2024-10-30 12:24:44.686962] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.686986] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:12.152 [2024-10-30 12:24:44.687002] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687026] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.152 [2024-10-30 12:24:44.687036] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.152 [2024-10-30 12:24:44.687042] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.152 [2024-10-30 12:24:44.687061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:12.152 [2024-10-30 12:24:44.687117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:12.152 [2024-10-30 12:24:44.687133] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:12.152 [2024-10-30 12:24:44.687141] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:12.152 [2024-10-30 12:24:44.687148] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:12.152 [2024-10-30 12:24:44.687156] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:12.152 [2024-10-30 12:24:44.687163] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:12.152 [2024-10-30 12:24:44.687170] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:12.152 [2024-10-30 12:24:44.687178] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687189] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:12.152 [2024-10-30 12:24:44.687220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:12.152 [2024-10-30 12:24:44.687262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.152 [2024-10-30 12:24:44.687280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.152 [2024-10-30 12:24:44.687293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.152 [2024-10-30 12:24:44.687305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.152 [2024-10-30 12:24:44.687314] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687330] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:12.152 [2024-10-30 12:24:44.687357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:12.152 [2024-10-30 12:24:44.687373] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:12.152 
[2024-10-30 12:24:44.687383] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687394] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687404] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:12.152 [2024-10-30 12:24:44.687430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:12.152 [2024-10-30 12:24:44.687498] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:12.152 [2024-10-30 12:24:44.687514] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687528] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:12.153 [2024-10-30 12:24:44.687537] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:12.153 [2024-10-30 12:24:44.687543] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.153 [2024-10-30 12:24:44.687568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.687580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.687597] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:12.153 [2024-10-30 12:24:44.687670] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687687] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687699] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.153 [2024-10-30 12:24:44.687707] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.153 [2024-10-30 12:24:44.687713] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.153 [2024-10-30 12:24:44.687722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.687747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.687768] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687782] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687797] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.153 [2024-10-30 12:24:44.687806] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.153 [2024-10-30 12:24:44.687812] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.153 [2024-10-30 12:24:44.687821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.687835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.687849] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687860] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687874] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687884] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687892] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687901] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687909] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:12.153 [2024-10-30 12:24:44.687917] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:12.153 [2024-10-30 12:24:44.687925] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:12.153 [2024-10-30 12:24:44.687950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.687968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.687987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.687998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.688014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.688024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.688040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.688050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.688072] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:12.153 [2024-10-30 12:24:44.688081] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:12.153 [2024-10-30 12:24:44.688087] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:12.153 [2024-10-30 12:24:44.688093] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:12.153 [2024-10-30 12:24:44.688101] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:12.153 [2024-10-30 12:24:44.688111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:12.153 [2024-10-30 12:24:44.688123] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:12.153 [2024-10-30 12:24:44.688131] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:12.153 [2024-10-30 12:24:44.688136] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.153 [2024-10-30 12:24:44.688145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.688156] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:12.153 [2024-10-30 12:24:44.688164] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.153 [2024-10-30 12:24:44.688170] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.153 [2024-10-30 12:24:44.688178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.688194] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:12.153 [2024-10-30 12:24:44.688203] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:12.153 [2024-10-30 12:24:44.688209] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.153 [2024-10-30 12:24:44.688217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:12.153 [2024-10-30 12:24:44.688229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.688275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.688297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:12.153 [2024-10-30 12:24:44.688310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:12.153 ===================================================== 00:13:12.153 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:12.153 ===================================================== 00:13:12.153 Controller Capabilities/Features 00:13:12.153 ================================ 00:13:12.153 Vendor ID: 4e58 00:13:12.153 Subsystem Vendor ID: 4e58 00:13:12.153 Serial Number: SPDK1 00:13:12.153 Model Number: SPDK bdev Controller 00:13:12.153 Firmware Version: 25.01 00:13:12.153 Recommended Arb Burst: 6 00:13:12.153 IEEE OUI Identifier: 8d 6b 50 00:13:12.153 Multi-path I/O 00:13:12.153 May have multiple subsystem ports: Yes 00:13:12.153 May have multiple controllers: Yes 00:13:12.153 Associated with SR-IOV VF: No 00:13:12.153 Max Data Transfer Size: 131072 00:13:12.153 Max Number of Namespaces: 32 00:13:12.153 Max Number of I/O Queues: 127 00:13:12.153 NVMe Specification Version (VS): 1.3 00:13:12.153 NVMe Specification Version (Identify): 1.3 00:13:12.153 Maximum Queue Entries: 256 00:13:12.153 Contiguous Queues Required: Yes 00:13:12.153 Arbitration Mechanisms Supported 00:13:12.153 Weighted Round Robin: Not Supported 00:13:12.153 Vendor Specific: Not Supported 00:13:12.153 Reset Timeout: 15000 ms 00:13:12.153 Doorbell Stride: 4 bytes 00:13:12.153 NVM Subsystem Reset: Not Supported 00:13:12.153 Command Sets Supported 00:13:12.154 NVM Command Set: Supported 00:13:12.154 Boot Partition: Not Supported 00:13:12.154 Memory Page Size Minimum: 4096 bytes 00:13:12.154 Memory Page Size Maximum: 4096 bytes 00:13:12.154 Persistent Memory Region: Not Supported 00:13:12.154 Optional Asynchronous Events Supported 00:13:12.154 Namespace Attribute Notices: Supported 00:13:12.154 Firmware Activation Notices: Not Supported 00:13:12.154 ANA Change Notices: Not Supported 00:13:12.154 PLE Aggregate Log Change Notices: Not Supported 00:13:12.154 LBA Status Info Alert Notices: Not Supported 00:13:12.154 EGE Aggregate Log Change Notices: Not Supported 00:13:12.154 Normal NVM Subsystem Shutdown event: Not Supported 00:13:12.154 Zone Descriptor Change Notices: Not Supported 00:13:12.154 Discovery Log Change Notices: Not Supported 00:13:12.154 Controller Attributes 00:13:12.154 128-bit Host Identifier: Supported 00:13:12.154 Non-Operational Permissive Mode: Not Supported 00:13:12.154 NVM Sets: Not Supported 00:13:12.154 Read Recovery Levels: Not Supported 00:13:12.154 Endurance Groups: Not Supported 00:13:12.154 Predictable Latency Mode: Not Supported 00:13:12.154 Traffic Based Keep ALive: Not Supported 00:13:12.154 Namespace Granularity: Not Supported 00:13:12.154 SQ Associations: Not Supported 00:13:12.154 UUID List: Not Supported 00:13:12.154 Multi-Domain Subsystem: Not Supported 00:13:12.154 Fixed Capacity Management: Not Supported 00:13:12.154 Variable Capacity Management: Not Supported 00:13:12.154 Delete Endurance Group: Not Supported 00:13:12.154 Delete NVM Set: Not Supported 00:13:12.154 Extended LBA Formats Supported: Not Supported 00:13:12.154 Flexible Data Placement Supported: Not Supported 00:13:12.154 00:13:12.154 Controller Memory Buffer Support 00:13:12.154 ================================ 00:13:12.154 
Supported: No 00:13:12.154 00:13:12.154 Persistent Memory Region Support 00:13:12.154 ================================ 00:13:12.154 Supported: No 00:13:12.154 00:13:12.154 Admin Command Set Attributes 00:13:12.154 ============================ 00:13:12.154 Security Send/Receive: Not Supported 00:13:12.154 Format NVM: Not Supported 00:13:12.154 Firmware Activate/Download: Not Supported 00:13:12.154 Namespace Management: Not Supported 00:13:12.154 Device Self-Test: Not Supported 00:13:12.154 Directives: Not Supported 00:13:12.154 NVMe-MI: Not Supported 00:13:12.154 Virtualization Management: Not Supported 00:13:12.154 Doorbell Buffer Config: Not Supported 00:13:12.154 Get LBA Status Capability: Not Supported 00:13:12.154 Command & Feature Lockdown Capability: Not Supported 00:13:12.154 Abort Command Limit: 4 00:13:12.154 Async Event Request Limit: 4 00:13:12.154 Number of Firmware Slots: N/A 00:13:12.154 Firmware Slot 1 Read-Only: N/A 00:13:12.154 Firmware Activation Without Reset: N/A 00:13:12.154 Multiple Update Detection Support: N/A 00:13:12.154 Firmware Update Granularity: No Information Provided 00:13:12.154 Per-Namespace SMART Log: No 00:13:12.154 Asymmetric Namespace Access Log Page: Not Supported 00:13:12.154 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:12.154 Command Effects Log Page: Supported 00:13:12.154 Get Log Page Extended Data: Supported 00:13:12.154 Telemetry Log Pages: Not Supported 00:13:12.154 Persistent Event Log Pages: Not Supported 00:13:12.154 Supported Log Pages Log Page: May Support 00:13:12.154 Commands Supported & Effects Log Page: Not Supported 00:13:12.154 Feature Identifiers & Effects Log Page:May Support 00:13:12.154 NVMe-MI Commands & Effects Log Page: May Support 00:13:12.154 Data Area 4 for Telemetry Log: Not Supported 00:13:12.154 Error Log Page Entries Supported: 128 00:13:12.154 Keep Alive: Supported 00:13:12.154 Keep Alive Granularity: 10000 ms 00:13:12.154 00:13:12.154 NVM Command Set Attributes 00:13:12.154 ========================== 00:13:12.154 Submission Queue Entry Size 00:13:12.154 Max: 64 00:13:12.154 Min: 64 00:13:12.154 Completion Queue Entry Size 00:13:12.154 Max: 16 00:13:12.154 Min: 16 00:13:12.154 Number of Namespaces: 32 00:13:12.154 Compare Command: Supported 00:13:12.154 Write Uncorrectable Command: Not Supported 00:13:12.154 Dataset Management Command: Supported 00:13:12.154 Write Zeroes Command: Supported 00:13:12.154 Set Features Save Field: Not Supported 00:13:12.154 Reservations: Not Supported 00:13:12.154 Timestamp: Not Supported 00:13:12.154 Copy: Supported 00:13:12.154 Volatile Write Cache: Present 00:13:12.154 Atomic Write Unit (Normal): 1 00:13:12.154 Atomic Write Unit (PFail): 1 00:13:12.154 Atomic Compare & Write Unit: 1 00:13:12.154 Fused Compare & Write: Supported 00:13:12.154 Scatter-Gather List 00:13:12.154 SGL Command Set: Supported (Dword aligned) 00:13:12.154 SGL Keyed: Not Supported 00:13:12.154 SGL Bit Bucket Descriptor: Not Supported 00:13:12.154 SGL Metadata Pointer: Not Supported 00:13:12.154 Oversized SGL: Not Supported 00:13:12.154 SGL Metadata Address: Not Supported 00:13:12.154 SGL Offset: Not Supported 00:13:12.154 Transport SGL Data Block: Not Supported 00:13:12.154 Replay Protected Memory Block: Not Supported 00:13:12.154 00:13:12.154 Firmware Slot Information 00:13:12.154 ========================= 00:13:12.154 Active slot: 1 00:13:12.154 Slot 1 Firmware Revision: 25.01 00:13:12.154 00:13:12.154 00:13:12.154 Commands Supported and Effects 00:13:12.154 ============================== 00:13:12.154 Admin 
Commands 00:13:12.154 -------------- 00:13:12.154 Get Log Page (02h): Supported 00:13:12.154 Identify (06h): Supported 00:13:12.154 Abort (08h): Supported 00:13:12.154 Set Features (09h): Supported 00:13:12.154 Get Features (0Ah): Supported 00:13:12.154 Asynchronous Event Request (0Ch): Supported 00:13:12.154 Keep Alive (18h): Supported 00:13:12.154 I/O Commands 00:13:12.154 ------------ 00:13:12.154 Flush (00h): Supported LBA-Change 00:13:12.154 Write (01h): Supported LBA-Change 00:13:12.154 Read (02h): Supported 00:13:12.154 Compare (05h): Supported 00:13:12.154 Write Zeroes (08h): Supported LBA-Change 00:13:12.154 Dataset Management (09h): Supported LBA-Change 00:13:12.154 Copy (19h): Supported LBA-Change 00:13:12.154 00:13:12.154 Error Log 00:13:12.154 ========= 00:13:12.154 00:13:12.154 Arbitration 00:13:12.154 =========== 00:13:12.154 Arbitration Burst: 1 00:13:12.154 00:13:12.154 Power Management 00:13:12.154 ================ 00:13:12.154 Number of Power States: 1 00:13:12.154 Current Power State: Power State #0 00:13:12.154 Power State #0: 00:13:12.154 Max Power: 0.00 W 00:13:12.155 Non-Operational State: Operational 00:13:12.155 Entry Latency: Not Reported 00:13:12.155 Exit Latency: Not Reported 00:13:12.155 Relative Read Throughput: 0 00:13:12.155 Relative Read Latency: 0 00:13:12.155 Relative Write Throughput: 0 00:13:12.155 Relative Write Latency: 0 00:13:12.155 Idle Power: Not Reported 00:13:12.155 Active Power: Not Reported 00:13:12.155 Non-Operational Permissive Mode: Not Supported 00:13:12.155 00:13:12.155 Health Information 00:13:12.155 ================== 00:13:12.155 Critical Warnings: 00:13:12.155 Available Spare Space: OK 00:13:12.155 Temperature: OK 00:13:12.155 Device Reliability: OK 00:13:12.155 Read Only: No 00:13:12.155 Volatile Memory Backup: OK 00:13:12.155 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:12.155 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:12.155 Available Spare: 0% 00:13:12.155 Available Spare Threshold: 0% 00:13:12.155
[2024-10-30 12:24:44.688441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:12.155 [2024-10-30 12:24:44.688458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:12.155 [2024-10-30 12:24:44.688507] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:12.155 [2024-10-30 12:24:44.688525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.155 [2024-10-30 12:24:44.688550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.155 [2024-10-30 12:24:44.688561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.155 [2024-10-30 12:24:44.688570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.155 [2024-10-30 12:24:44.688946] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:12.155 [2024-10-30 12:24:44.688966] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:12.155 [2024-10-30 12:24:44.689950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:12.155 [2024-10-30 12:24:44.690047] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:12.155 [2024-10-30 12:24:44.690061] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:12.155 [2024-10-30 12:24:44.690964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:12.155 [2024-10-30 12:24:44.690986] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:12.155 [2024-10-30 12:24:44.691038] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:12.155 [2024-10-30 12:24:44.696267] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:12.155
Life Percentage Used: 0% 00:13:12.155 Data Units Read: 0 00:13:12.155 Data Units Written: 0 00:13:12.155 Host Read Commands: 0 00:13:12.155 Host Write Commands: 0 00:13:12.155 Controller Busy Time: 0 minutes 00:13:12.155 Power Cycles: 0 00:13:12.155 Power On Hours: 0 hours 00:13:12.155 Unsafe Shutdowns: 0 00:13:12.155 Unrecoverable Media Errors: 0 00:13:12.155 Lifetime Error Log Entries: 0 00:13:12.155 Warning Temperature Time: 0 minutes 00:13:12.155 Critical Temperature Time: 0 minutes 00:13:12.155 00:13:12.155 Number of Queues 00:13:12.155 ================ 00:13:12.155 Number of I/O Submission Queues: 127 00:13:12.155 Number of I/O Completion Queues: 127 00:13:12.155 00:13:12.155 Active Namespaces 00:13:12.155 ================= 00:13:12.155 Namespace ID:1 00:13:12.155 Error Recovery Timeout: Unlimited 00:13:12.155 Command Set Identifier: NVM (00h) 00:13:12.155 Deallocate: Supported 00:13:12.155 Deallocated/Unwritten Error: Not Supported 00:13:12.155 Deallocated Read Value: Unknown 00:13:12.155 Deallocate in Write Zeroes: Not Supported 00:13:12.155 Deallocated Guard Field: 0xFFFF 00:13:12.155 Flush: Supported 00:13:12.155 Reservation: Supported 00:13:12.155 Namespace Sharing Capabilities: Multiple Controllers 00:13:12.155 Size (in LBAs): 131072 (0GiB) 00:13:12.155 Capacity (in LBAs): 131072 (0GiB) 00:13:12.155 Utilization (in LBAs): 131072 (0GiB) 00:13:12.155 NGUID: 094F67FCF495437EBDA735CC01B6FFA8 00:13:12.155 UUID: 094f67fc-f495-437e-bda7-35cc01b6ffa8 00:13:12.155 Thin Provisioning: Not Supported 00:13:12.155 Per-NS Atomic Units: Yes 00:13:12.155 Atomic Boundary Size (Normal): 0 00:13:12.155 Atomic Boundary Size (PFail): 0 00:13:12.155 Atomic Boundary Offset: 0 00:13:12.155 Maximum Single Source Range Length: 65535 00:13:12.155 Maximum Copy Length: 65535 00:13:12.155 Maximum Source Range Count: 1 00:13:12.155 NGUID/EUI64 Never Reused: No 00:13:12.155 Namespace Write Protected: No 00:13:12.155 Number of LBA Formats: 1 00:13:12.155 Current LBA Format: LBA Format #00 00:13:12.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:12.155 00:13:12.155 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
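The @84 step above launches spdk_nvme_perf against the same vfio-user endpoint that was just identified; its output, and a matching @85 write run, follow below. A minimal sketch of the invocation pattern, assuming a built SPDK tree; the transport string and flags mirror the command in the log, while the per-flag comments are this editor's reading of spdk_nvme_perf usage rather than anything the log itself states:

  #!/usr/bin/env bash
  # Sketch: drive an exported vfio-user controller with spdk_nvme_perf,
  # once per workload, mirroring the @84 (read) and @85 (write) steps.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  TRADDR=/var/run/vfio-user/domain/vfio-user1/1
  SUBNQN=nqn.2019-07.io.spdk:cnode1

  for workload in read write; do
    # -q 128: queue depth, -o 4096: I/O size in bytes, -t 5: run time in
    # seconds, -c 0x2: core mask (core 1), -s 256: hugepage memory in MB,
    # -g: single-file-segments mode for DPDK memory (the same option shows
    #     up as --single-file-segments in the EAL parameter dump later on).
    "$SPDK_DIR/build/bin/spdk_nvme_perf" \
      -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN" \
      -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done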
00:13:12.415 [2024-10-30 12:24:44.946164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:17.679 Initializing NVMe Controllers 00:13:17.679 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:17.679 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:17.679 Initialization complete. Launching workers. 00:13:17.679 ======================================================== 00:13:17.679 Latency(us) 00:13:17.679 Device Information : IOPS MiB/s Average min max 00:13:17.679 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34054.00 133.02 3760.27 1161.97 7472.18 00:13:17.679 ======================================================== 00:13:17.679 Total : 34054.00 133.02 3760.27 1161.97 7472.18 00:13:17.679 00:13:17.679 [2024-10-30 12:24:49.972853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:17.679 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:17.679 [2024-10-30 12:24:50.234111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.941 Initializing NVMe Controllers 00:13:22.941 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:22.941 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:22.941 Initialization complete. Launching workers. 
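The summary tables from these perf runs are internally consistent: MiB/s is IOPS times the 4096-byte I/O size divided by 2^20 (34054.00/256 = 133.02 for read, and 15946.93/256 = 62.29 for the write run whose results follow below), and by Little's law IOPS is roughly queue depth divided by average latency (128 / 3760.27 us is about 34041, within rounding of the reported 34054). A small awk sanity check, with the read-run numbers hard-coded from the table above:

  #!/usr/bin/env bash
  # Cross-check one perf summary line: derive MiB/s from IOPS and I/O size,
  # and estimate IOPS from queue depth and average latency (Little's law).
  awk 'BEGIN {
    iops = 34054.00; io_bytes = 4096; qd = 128; avg_us = 3760.27
    printf "MiB/s         = %.2f (reported: 133.02)\n", iops * io_bytes / (1024 * 1024)
    printf "IOPS estimate = %.0f (reported: 34054.00)\n", qd / (avg_us / 1000000)
  }'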
00:13:22.941 ======================================================== 00:13:22.941 Latency(us) 00:13:22.941 Device Information : IOPS MiB/s Average min max 00:13:22.941 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15946.93 62.29 8025.81 6933.60 15960.20 00:13:22.941 ======================================================== 00:13:22.941 Total : 15946.93 62.29 8025.81 6933.60 15960.20 00:13:22.941 00:13:22.941 [2024-10-30 12:24:55.266283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.941 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:22.941 [2024-10-30 12:24:55.498391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:28.207 [2024-10-30 12:25:00.578677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:28.207 Initializing NVMe Controllers 00:13:28.207 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.207 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:28.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:28.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:28.207 Initialization complete. Launching workers. 00:13:28.207 Starting thread on core 2 00:13:28.207 Starting thread on core 3 00:13:28.207 Starting thread on core 1 00:13:28.208 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:28.208 [2024-10-30 12:25:00.885888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.488 [2024-10-30 12:25:03.949538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.488 Initializing NVMe Controllers 00:13:31.488 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.488 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:31.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:31.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:31.488 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:31.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:31.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:31.488 Initialization complete. Launching workers. 
00:13:31.488 Starting thread on core 1 with urgent priority queue 00:13:31.488 Starting thread on core 2 with urgent priority queue 00:13:31.488 Starting thread on core 3 with urgent priority queue 00:13:31.488 Starting thread on core 0 with urgent priority queue 00:13:31.488 SPDK bdev Controller (SPDK1 ) core 0: 6096.33 IO/s 16.40 secs/100000 ios 00:13:31.488 SPDK bdev Controller (SPDK1 ) core 1: 6250.33 IO/s 16.00 secs/100000 ios 00:13:31.488 SPDK bdev Controller (SPDK1 ) core 2: 5695.33 IO/s 17.56 secs/100000 ios 00:13:31.488 SPDK bdev Controller (SPDK1 ) core 3: 6273.33 IO/s 15.94 secs/100000 ios 00:13:31.488 ======================================================== 00:13:31.488 00:13:31.488 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:31.745 [2024-10-30 12:25:04.259757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.745 Initializing NVMe Controllers 00:13:31.745 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.745 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.745 Namespace ID: 1 size: 0GB 00:13:31.745 Initialization complete. 00:13:31.745 INFO: using host memory buffer for IO 00:13:31.745 Hello world! 00:13:31.745 [2024-10-30 12:25:04.294373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.745 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:32.003 [2024-10-30 12:25:04.611957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.374 Initializing NVMe Controllers 00:13:33.374 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.374 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.374 Initialization complete. Launching workers. 
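The -c arguments across these runs (0x2 for perf, 0xE for reconnect, 0xf for arbitration) are hex core masks, which is why the reconnect run above starts threads on cores 1 through 3 and the arbitration run on cores 0 through 3. A tiny decoder, offered purely as an illustration and not something the test scripts themselves run:

  #!/usr/bin/env bash
  # Decode an SPDK-style hex core mask into the list of cores it selects.
  decode_mask() {
    local mask=$(( $1 )) bit=0 cores=""
    while (( mask )); do
      (( mask & 1 )) && cores="$cores $bit"
      mask=$(( mask >> 1 ))
      bit=$(( bit + 1 ))
    done
    echo "${cores# }"
  }

  for m in 0x2 0xE 0xf; do
    printf '%-4s -> cores %s\n' "$m" "$(decode_mask "$m")"
  done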
00:13:33.374 submit (in ns) avg, min, max = 8526.2, 3506.7, 4012840.0 00:13:33.374 complete (in ns) avg, min, max = 24340.5, 2062.2, 7990026.7 00:13:33.374 00:13:33.374 Submit histogram 00:13:33.374 ================ 00:13:33.374 Range in us Cumulative Count 00:13:33.374 3.484 - 3.508: 0.0227% ( 3) 00:13:33.374 3.508 - 3.532: 0.2123% ( 25) 00:13:33.374 3.532 - 3.556: 1.0615% ( 112) 00:13:33.374 3.556 - 3.579: 3.2375% ( 287) 00:13:33.374 3.579 - 3.603: 7.8626% ( 610) 00:13:33.374 3.603 - 3.627: 15.8996% ( 1060) 00:13:33.374 3.627 - 3.650: 24.6645% ( 1156) 00:13:33.374 3.650 - 3.674: 33.8085% ( 1206) 00:13:33.374 3.674 - 3.698: 41.2010% ( 975) 00:13:33.374 3.698 - 3.721: 48.6314% ( 980) 00:13:33.374 3.721 - 3.745: 52.6575% ( 531) 00:13:33.374 3.745 - 3.769: 56.7822% ( 544) 00:13:33.374 3.769 - 3.793: 60.4974% ( 490) 00:13:33.374 3.793 - 3.816: 64.3794% ( 512) 00:13:33.374 3.816 - 3.840: 67.7534% ( 445) 00:13:33.374 3.840 - 3.864: 71.7719% ( 530) 00:13:33.374 3.864 - 3.887: 76.2529% ( 591) 00:13:33.374 3.887 - 3.911: 80.2639% ( 529) 00:13:33.374 3.911 - 3.935: 83.7971% ( 466) 00:13:33.374 3.935 - 3.959: 86.0566% ( 298) 00:13:33.375 3.959 - 3.982: 87.8232% ( 233) 00:13:33.375 3.982 - 4.006: 89.0970% ( 168) 00:13:33.375 4.006 - 4.030: 90.4921% ( 184) 00:13:33.375 4.030 - 4.053: 91.5232% ( 136) 00:13:33.375 4.053 - 4.077: 92.4710% ( 125) 00:13:33.375 4.077 - 4.101: 93.2899% ( 108) 00:13:33.375 4.101 - 4.124: 94.0177% ( 96) 00:13:33.375 4.124 - 4.148: 94.7001% ( 90) 00:13:33.375 4.148 - 4.172: 95.2385% ( 71) 00:13:33.375 4.172 - 4.196: 95.6327% ( 52) 00:13:33.375 4.196 - 4.219: 95.8678% ( 31) 00:13:33.375 4.219 - 4.243: 96.0497% ( 24) 00:13:33.375 4.243 - 4.267: 96.1938% ( 19) 00:13:33.375 4.267 - 4.290: 96.3151% ( 16) 00:13:33.375 4.290 - 4.314: 96.3530% ( 5) 00:13:33.375 4.314 - 4.338: 96.4061% ( 7) 00:13:33.375 4.338 - 4.361: 96.5047% ( 13) 00:13:33.375 4.361 - 4.385: 96.5881% ( 11) 00:13:33.375 4.385 - 4.409: 96.6487% ( 8) 00:13:33.375 4.409 - 4.433: 96.6563% ( 1) 00:13:33.375 4.433 - 4.456: 96.6791% ( 3) 00:13:33.375 4.456 - 4.480: 96.7018% ( 3) 00:13:33.375 4.480 - 4.504: 96.7397% ( 5) 00:13:33.375 4.504 - 4.527: 96.7852% ( 6) 00:13:33.375 4.527 - 4.551: 96.8155% ( 4) 00:13:33.375 4.551 - 4.575: 96.8383% ( 3) 00:13:33.375 4.575 - 4.599: 96.8534% ( 2) 00:13:33.375 4.599 - 4.622: 96.8686% ( 2) 00:13:33.375 4.646 - 4.670: 96.8838% ( 2) 00:13:33.375 4.670 - 4.693: 96.8913% ( 1) 00:13:33.375 4.693 - 4.717: 96.8989% ( 1) 00:13:33.375 4.717 - 4.741: 96.9293% ( 4) 00:13:33.375 4.741 - 4.764: 96.9368% ( 1) 00:13:33.375 4.764 - 4.788: 96.9748% ( 5) 00:13:33.375 4.788 - 4.812: 97.0202% ( 6) 00:13:33.375 4.812 - 4.836: 97.0506% ( 4) 00:13:33.375 4.836 - 4.859: 97.0961% ( 6) 00:13:33.375 4.859 - 4.883: 97.1416% ( 6) 00:13:33.375 4.883 - 4.907: 97.1567% ( 2) 00:13:33.375 4.907 - 4.930: 97.2022% ( 6) 00:13:33.375 4.930 - 4.954: 97.2553% ( 7) 00:13:33.375 4.954 - 4.978: 97.3159% ( 8) 00:13:33.375 4.978 - 5.001: 97.3918% ( 10) 00:13:33.375 5.001 - 5.025: 97.4903% ( 13) 00:13:33.375 5.025 - 5.049: 97.5434% ( 7) 00:13:33.375 5.049 - 5.073: 97.6041% ( 8) 00:13:33.375 5.073 - 5.096: 97.6496% ( 6) 00:13:33.375 5.096 - 5.120: 97.7330% ( 11) 00:13:33.375 5.120 - 5.144: 97.7785% ( 6) 00:13:33.375 5.144 - 5.167: 97.8164% ( 5) 00:13:33.375 5.167 - 5.191: 97.8543% ( 5) 00:13:33.375 5.191 - 5.215: 97.9225% ( 9) 00:13:33.375 5.215 - 5.239: 97.9680% ( 6) 00:13:33.375 5.239 - 5.262: 98.0059% ( 5) 00:13:33.375 5.262 - 5.286: 98.0211% ( 2) 00:13:33.375 5.286 - 5.310: 98.0362% ( 2) 00:13:33.375 5.310 - 5.333: 98.0438% ( 1) 
00:13:33.375 5.333 - 5.357: 98.0514% ( 1) 00:13:33.375 5.357 - 5.381: 98.0590% ( 1) 00:13:33.375 5.381 - 5.404: 98.0666% ( 1) 00:13:33.375 5.404 - 5.428: 98.0742% ( 1) 00:13:33.375 5.428 - 5.452: 98.0969% ( 3) 00:13:33.375 5.452 - 5.476: 98.1196% ( 3) 00:13:33.375 5.476 - 5.499: 98.1272% ( 1) 00:13:33.375 5.499 - 5.523: 98.1424% ( 2) 00:13:33.375 5.547 - 5.570: 98.1500% ( 1) 00:13:33.375 5.570 - 5.594: 98.1576% ( 1) 00:13:33.375 5.665 - 5.689: 98.1651% ( 1) 00:13:33.375 5.736 - 5.760: 98.1727% ( 1) 00:13:33.375 5.760 - 5.784: 98.1879% ( 2) 00:13:33.375 5.784 - 5.807: 98.1955% ( 1) 00:13:33.375 5.831 - 5.855: 98.2030% ( 1) 00:13:33.375 6.116 - 6.163: 98.2106% ( 1) 00:13:33.375 6.163 - 6.210: 98.2182% ( 1) 00:13:33.375 6.542 - 6.590: 98.2258% ( 1) 00:13:33.375 6.684 - 6.732: 98.2334% ( 1) 00:13:33.375 6.732 - 6.779: 98.2485% ( 2) 00:13:33.375 6.874 - 6.921: 98.2561% ( 1) 00:13:33.375 7.301 - 7.348: 98.2637% ( 1) 00:13:33.375 7.396 - 7.443: 98.2713% ( 1) 00:13:33.375 7.490 - 7.538: 98.2789% ( 1) 00:13:33.375 7.585 - 7.633: 98.2940% ( 2) 00:13:33.375 7.633 - 7.680: 98.3016% ( 1) 00:13:33.375 7.680 - 7.727: 98.3092% ( 1) 00:13:33.375 7.727 - 7.775: 98.3168% ( 1) 00:13:33.375 7.775 - 7.822: 98.3244% ( 1) 00:13:33.375 7.870 - 7.917: 98.3319% ( 1) 00:13:33.375 7.917 - 7.964: 98.3471% ( 2) 00:13:33.375 7.964 - 8.012: 98.3547% ( 1) 00:13:33.375 8.012 - 8.059: 98.3699% ( 2) 00:13:33.375 8.154 - 8.201: 98.3774% ( 1) 00:13:33.375 8.201 - 8.249: 98.4002% ( 3) 00:13:33.375 8.249 - 8.296: 98.4078% ( 1) 00:13:33.375 8.296 - 8.344: 98.4153% ( 1) 00:13:33.375 8.344 - 8.391: 98.4229% ( 1) 00:13:33.375 8.391 - 8.439: 98.4457% ( 3) 00:13:33.375 8.439 - 8.486: 98.4608% ( 2) 00:13:33.375 8.486 - 8.533: 98.4836% ( 3) 00:13:33.375 8.533 - 8.581: 98.5063% ( 3) 00:13:33.375 8.581 - 8.628: 98.5215% ( 2) 00:13:33.375 8.628 - 8.676: 98.5291% ( 1) 00:13:33.375 8.770 - 8.818: 98.5367% ( 1) 00:13:33.375 8.818 - 8.865: 98.5518% ( 2) 00:13:33.375 8.865 - 8.913: 98.5594% ( 1) 00:13:33.375 8.960 - 9.007: 98.5670% ( 1) 00:13:33.375 9.007 - 9.055: 98.5746% ( 1) 00:13:33.375 9.102 - 9.150: 98.5822% ( 1) 00:13:33.375 9.197 - 9.244: 98.5897% ( 1) 00:13:33.375 9.244 - 9.292: 98.5973% ( 1) 00:13:33.375 9.576 - 9.624: 98.6049% ( 1) 00:13:33.375 9.671 - 9.719: 98.6125% ( 1) 00:13:33.375 9.766 - 9.813: 98.6201% ( 1) 00:13:33.375 9.861 - 9.908: 98.6276% ( 1) 00:13:33.375 9.956 - 10.003: 98.6352% ( 1) 00:13:33.375 10.003 - 10.050: 98.6428% ( 1) 00:13:33.375 10.240 - 10.287: 98.6504% ( 1) 00:13:33.375 10.335 - 10.382: 98.6580% ( 1) 00:13:33.375 10.477 - 10.524: 98.6656% ( 1) 00:13:33.375 10.572 - 10.619: 98.6731% ( 1) 00:13:33.375 10.619 - 10.667: 98.6807% ( 1) 00:13:33.375 10.667 - 10.714: 98.6883% ( 1) 00:13:33.375 11.141 - 11.188: 98.6959% ( 1) 00:13:33.375 11.188 - 11.236: 98.7035% ( 1) 00:13:33.375 11.236 - 11.283: 98.7110% ( 1) 00:13:33.375 11.852 - 11.899: 98.7186% ( 1) 00:13:33.375 12.136 - 12.231: 98.7262% ( 1) 00:13:33.375 12.516 - 12.610: 98.7338% ( 1) 00:13:33.375 12.610 - 12.705: 98.7414% ( 1) 00:13:33.375 12.705 - 12.800: 98.7490% ( 1) 00:13:33.375 12.800 - 12.895: 98.7565% ( 1) 00:13:33.375 12.895 - 12.990: 98.7717% ( 2) 00:13:33.375 12.990 - 13.084: 98.7793% ( 1) 00:13:33.375 13.084 - 13.179: 98.7869% ( 1) 00:13:33.375 13.274 - 13.369: 98.7944% ( 1) 00:13:33.375 13.464 - 13.559: 98.8020% ( 1) 00:13:33.375 13.748 - 13.843: 98.8096% ( 1) 00:13:33.375 13.843 - 13.938: 98.8172% ( 1) 00:13:33.375 14.601 - 14.696: 98.8248% ( 1) 00:13:33.375 17.067 - 17.161: 98.8399% ( 2) 00:13:33.375 17.161 - 17.256: 98.8551% ( 2) 00:13:33.375 
17.256 - 17.351: 98.8627% ( 1) 00:13:33.375 17.351 - 17.446: 98.8703% ( 1) 00:13:33.375 17.446 - 17.541: 98.9082% ( 5) 00:13:33.375 17.541 - 17.636: 98.9158% ( 1) 00:13:33.375 17.636 - 17.730: 98.9764% ( 8) 00:13:33.375 17.730 - 17.825: 99.0295% ( 7) 00:13:33.375 17.825 - 17.920: 99.1129% ( 11) 00:13:33.375 17.920 - 18.015: 99.1736% ( 8) 00:13:33.375 18.015 - 18.110: 99.2266% ( 7) 00:13:33.375 18.110 - 18.204: 99.3024% ( 10) 00:13:33.375 18.204 - 18.299: 99.4162% ( 15) 00:13:33.375 18.299 - 18.394: 99.4996% ( 11) 00:13:33.375 18.394 - 18.489: 99.5527% ( 7) 00:13:33.375 18.489 - 18.584: 99.6361% ( 11) 00:13:33.375 18.584 - 18.679: 99.6816% ( 6) 00:13:33.375 18.679 - 18.773: 99.7195% ( 5) 00:13:33.375 18.773 - 18.868: 99.7346% ( 2) 00:13:33.375 18.868 - 18.963: 99.7574% ( 3) 00:13:33.375 18.963 - 19.058: 99.7650% ( 1) 00:13:33.375 19.058 - 19.153: 99.7725% ( 1) 00:13:33.375 19.153 - 19.247: 99.7953% ( 3) 00:13:33.375 19.247 - 19.342: 99.8029% ( 1) 00:13:33.375 19.532 - 19.627: 99.8104% ( 1) 00:13:33.375 19.816 - 19.911: 99.8180% ( 1) 00:13:33.375 21.807 - 21.902: 99.8256% ( 1) 00:13:33.375 22.566 - 22.661: 99.8332% ( 1) 00:13:33.375 23.324 - 23.419: 99.8408% ( 1) 00:13:33.375 23.988 - 24.083: 99.8484% ( 1) 00:13:33.375 25.600 - 25.790: 99.8559% ( 1) 00:13:33.375 26.548 - 26.738: 99.8635% ( 1) 00:13:33.375 27.307 - 27.496: 99.8711% ( 1) 00:13:33.375 28.634 - 28.824: 99.8787% ( 1) 00:13:33.375 29.013 - 29.203: 99.8863% ( 1) 00:13:33.375 3980.705 - 4004.978: 99.9924% ( 14) 00:13:33.375 4004.978 - 4029.250: 100.0000% ( 1) 00:13:33.375 00:13:33.375 Complete histogram 00:13:33.375 ================== 00:13:33.375 Range in us Cumulative Count 00:13:33.375 2.062 - 2.074: 9.6671% ( 1275) 00:13:33.375 2.074 - 2.086: 28.0006% ( 2418) 00:13:33.375 2.086 - 2.098: 30.1008% ( 277) 00:13:33.375 2.098 - 2.110: 46.3037% ( 2137) 00:13:33.375 2.110 - 2.121: 56.3500% ( 1325) 00:13:33.375 2.121 - 2.133: 58.4578% ( 278) 00:13:33.375 2.133 - 2.145: 65.4485% ( 922) 00:13:33.375 2.145 - 2.157: 69.7627% ( 569) 00:13:33.375 2.157 - 2.169: 71.2260% ( 193) 00:13:33.375 2.169 - 2.181: 76.5259% ( 699) 00:13:33.375 2.181 - 2.193: 79.1190% ( 342) 00:13:33.375 2.193 - 2.204: 79.9682% ( 112) 00:13:33.375 2.204 - 2.216: 82.9327% ( 391) 00:13:33.375 2.216 - 2.228: 85.5789% ( 349) 00:13:33.375 2.228 - 2.240: 87.5047% ( 254) 00:13:33.375 2.240 - 2.252: 90.5224% ( 398) 00:13:33.376 2.252 - 2.264: 92.0237% ( 198) 00:13:33.376 2.264 - 2.276: 92.4255% ( 53) 00:13:33.376 2.276 - 2.287: 93.0321% ( 80) 00:13:33.376 2.287 - 2.299: 93.4643% ( 57) 00:13:33.376 2.299 - 2.311: 94.2376% ( 102) 00:13:33.376 2.311 - 2.323: 94.7456% ( 67) 00:13:33.376 2.323 - 2.335: 94.8897% ( 19) 00:13:33.376 2.335 - 2.347: 94.9807% ( 12) 00:13:33.376 2.347 - 2.359: 95.0868% ( 14) 00:13:33.376 2.359 - 2.370: 95.1778% ( 12) 00:13:33.376 2.370 - 2.382: 95.3901% ( 28) 00:13:33.376 2.382 - 2.394: 95.5493% ( 21) 00:13:33.376 2.394 - 2.406: 95.6934% ( 19) 00:13:33.376 2.406 - 2.418: 95.7844% ( 12) 00:13:33.376 2.418 - 2.430: 95.9512% ( 22) 00:13:33.376 2.430 - 2.441: 96.0194% ( 9) 00:13:33.376 2.441 - 2.453: 96.1711% ( 20) 00:13:33.376 2.453 - 2.465: 96.3454% ( 23) 00:13:33.376 2.465 - 2.477: 96.4895% ( 19) 00:13:33.376 2.477 - 2.489: 96.6260% ( 18) 00:13:33.376 2.489 - 2.501: 96.7549% ( 17) 00:13:33.376 2.501 - 2.513: 96.9065% ( 20) 00:13:33.376 2.513 - 2.524: 97.0657% ( 21) 00:13:33.376 2.524 - 2.536: 97.2401% ( 23) 00:13:33.376 2.536 - 2.548: 97.4069% ( 22) 00:13:33.376 2.548 - 2.560: 97.5055% ( 13) 00:13:33.376 2.560 - 2.572: 97.5586% ( 7) 00:13:33.376 2.572 - 
2.584: 97.6420% ( 11) 00:13:33.376 2.584 - 2.596: 97.7102% ( 9) 00:13:33.376 2.596 - 2.607: 97.7633% ( 7) 00:13:33.376 2.607 - 2.619: 97.8088% ( 6) 00:13:33.376 2.619 - 2.631: 97.8164% ( 1) 00:13:33.376 2.631 - 2.643: 97.8619% ( 6) 00:13:33.376 2.643 - 2.655: 97.8846% ( 3) 00:13:33.376 2.655 - 2.667: 97.8922% ( 1) 00:13:33.376 2.667 - 2.679: 97.9073% ( 2) 00:13:33.376 2.679 - 2.690: 97.9301% ( 3) 00:13:33.376 2.690 - 2.702: 97.9377% ( 1) 00:13:33.376 2.714 - 2.726: 97.9453% ( 1) 00:13:33.376 2.726 - 2.738: 97.9528% ( 1) 00:13:33.376 2.773 - 2.785: 97.9680% ( 2) 00:13:33.376 2.785 - 2.797: 97.9756% ( 1) 00:13:33.376 2.797 - 2.809: 97.9907% ( 2) 00:13:33.376 2.809 - 2.821: 97.9983% ( 1) 00:13:33.376 2.833 - 2.844: 98.0211% ( 3) 00:13:33.376 2.844 - 2.856: 98.0287% ( 1) 00:13:33.376 2.868 - 2.880: 98.0438% ( 2) 00:13:33.376 2.880 - 2.892: 98.0590% ( 2) 00:13:33.376 2.892 - 2.904: 98.0666% ( 1) 00:13:33.376 2.916 - 2.927: 98.0742% ( 1) 00:13:33.376 2.927 - 2.939: 98.1045% ( 4) 00:13:33.376 2.939 - 2.951: 98.1424% ( 5) 00:13:33.376 2.963 - 2.975: 98.1576% ( 2) 00:13:33.376 2.975 - 2.987: 98.1727% ( 2) 00:13:33.376 2.987 - 2.999: 98.1955% ( 3) 00:13:33.376 2.999 - 3.010: 98.2030% ( 1) 00:13:33.376 3.010 - 3.022: 98.2334% ( 4) 00:13:33.376 3.034 - 3.058: 98.2637% ( 4) 00:13:33.376 3.058 - 3.081: 98.2789% ( 2) 00:13:33.376 3.081 - 3.105: 98.3168% ( 5) 00:13:33.376 3.105 - 3.129: 98.3395% ( 3) 00:13:33.376 3.129 - 3.153: 98.3547% ( 2) 00:13:33.376 3.153 - 3.176: 98.3623% ( 1) 00:13:33.376 3.176 - 3.200: 98.3699% ( 1) 00:13:33.376 3.200 - 3.224: 98.3926% ( 3) 00:13:33.376 3.224 - 3.247: 98.4153% ( 3) 00:13:33.376 3.271 - 3.295: 98.4229% ( 1) 00:13:33.376 3.295 - 3.319: 98.4381% ( 2) 00:13:33.376 3.390 - 3.413: 98.4457% ( 1) 00:13:33.376 3.461 - 3.484: 98.4533% ( 1) 00:13:33.376 3.556 - 3.579: 98.4608% ( 1) 00:13:33.376 3.650 - 3.674: 98.4760% ( 2) 00:13:33.376 3.721 - 3.745: 98.5063% ( 4) 00:13:33.376 3.745 - 3.769: 98.5139% ( 1) 00:13:33.376 3.793 - 3.816: 98.5215% ( 1) 00:13:33.376 3.840 - 3.864: 98.5442% ( 3) 00:13:33.376 3.864 - 3.887: 98.5670% ( 3) 00:13:33.376 3.887 - 3.911: 98.5822% ( 2) 00:13:33.376 3.959 - 3.982: 98.5897% ( 1) 00:13:33.376 3.982 - 4.006: 98.6049% ( 2) 00:13:33.376 4.006 - 4.030: 98.6125% ( 1) 00:13:33.376 4.101 - 4.124: 98.6201% ( 1) 00:13:33.376 4.124 - 4.148: 98.6352% ( 2) 00:13:33.376 4.148 - 4.172: 98.6428% ( 1) 00:13:33.376 4.290 - 4.314: 98.6504% ( 1) 00:13:33.376 4.575 - 4.599: 98.6580% ( 1) 00:13:33.376 4.717 - 4.741: 98.6656% ( 1) 00:13:33.376 5.523 - 5.547: 98.6731% ( 1) 00:13:33.376 6.258 - 6.305: 98.6807% ( 1) 00:13:33.376 6.305 - 6.353: 98.6883% ( 1) 00:13:33.376 6.353 - 6.400: 98.6959% ( 1) 00:13:33.376 6.400 - 6.447: 98.7035% ( 1) 00:13:33.376 6.542 - 6.590: 98.7186% ( 2) 00:13:33.376 6.684 - 6.732: 98.7338% ( 2) 00:13:33.376 6.732 - 6.779: 98.7414% ( 1) 00:13:33.376 6.827 - 6.874: 98.7490% ( 1) 00:13:33.376 6.921 - 6.969: 98.7641% ( 2) 00:13:33.376 6.969 - 7.016: 98.7717% ( 1) 00:13:33.376 7.159 - 7.206: 98.7869% ( 2) 00:13:33.376 7.206 - 7.253: 98.7944% ( 1) 00:13:33.376 7.253 - 7.301: 98.8020% ( 1) 00:13:33.376 7.348 - 7.396: 98.8096% ( 1) 00:13:33.376 7.585 - 7.633: 98.8172% ( 1) 00:13:33.376 7.822 - 7.870: 98.8248% ( 1) 00:13:33.376 8.533 - 8.581: 98.8324% ( 1) 00:13:33.376 9.150 - 9.197: 98.8399% ( 1) 00:13:33.376 12.421 - 12.516: 98.8475% ( 1) 00:13:33.376 15.170 - 15.265: 98.8627% ( 2) 00:13:33.376 15.550 - 15.644: 98.8703% ( 1) 00:13:33.376 15.644 - 15.739: 98.9006% ( 4) 00:13:33.376 15.834 - 15.929: 98.9233% ( 3) 00:13:33.376 15.929 - 16.024: 
98.9764% ( 7) 00:13:33.376 16.024 - 16.119: 99.0067% ( 4) 00:13:33.376 16.119 - 16.213: 99.0295% ( 3) 00:13:33.376 16.213 - 16.308: 99.0674% ( 5) 00:13:33.376 16.308 - 16.403: 99.1053% ( 5) 00:13:33.376 16.403 - 16.498: 99.1356% ( 4) 00:13:33.376 16.498 - 16.593: 99.1736% ( 5) 00:13:33.376 16.593 - 16.687: 99.1887% ( 2) 00:13:33.376 16.687 - 16.782: 99.2418% ( 7) 00:13:33.376 16.782 - 16.877: 99.2873% ( 6) 00:13:33.376 16.877 - 16.972: 99.3100% ( 3) 00:13:33.376 16.972 - 17.067: 99.3176% ( 1) 00:13:33.376 17.067 - 17.161: 99.3404% ( 3) 00:13:33.376 17.161 - 17.256: 99.3479% ( 1) 00:13:33.376 17.256 - 17.351: 99.3707% ( 3) 00:13:33.376 17.351 - 17.446: 99.3783% ( 1) 00:13:33.376 17.446 - 17.541: 99.3934% ( 2) 00:13:33.376
[2024-10-30 12:25:05.641329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.376
17.920 - 18.015: 99.4010% ( 1) 00:13:33.376 18.015 - 18.110: 99.4086% ( 1) 00:13:33.376 18.110 - 18.204: 99.4162% ( 1) 00:13:33.376 18.204 - 18.299: 99.4238% ( 1) 00:13:33.376 19.911 - 20.006: 99.4313% ( 1) 00:13:33.376 20.575 - 20.670: 99.4389% ( 1) 00:13:33.376 25.979 - 26.169: 99.4465% ( 1) 00:13:33.376 1007.313 - 1013.381: 99.4541% ( 1) 00:13:33.376 2803.484 - 2815.621: 99.4617% ( 1) 00:13:33.376 3640.889 - 3665.161: 99.4693% ( 1) 00:13:33.376 3980.705 - 4004.978: 99.9469% ( 63) 00:13:33.376 4004.978 - 4029.250: 99.9924% ( 6) 00:13:33.376 7961.410 - 8009.956: 100.0000% ( 1) 00:13:33.376 00:13:33.376 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:33.376 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:33.376 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:33.376 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:33.376 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.376 [ 00:13:33.376 { 00:13:33.376 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.376 "subtype": "Discovery", 00:13:33.376 "listen_addresses": [], 00:13:33.376 "allow_any_host": true, 00:13:33.376 "hosts": [] 00:13:33.376 }, 00:13:33.376 { 00:13:33.376 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.376 "subtype": "NVMe", 00:13:33.376 "listen_addresses": [ 00:13:33.376 { 00:13:33.376 "trtype": "VFIOUSER", 00:13:33.376 "adrfam": "IPv4", 00:13:33.376 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.376 "trsvcid": "0" 00:13:33.376 } 00:13:33.376 ], 00:13:33.376 "allow_any_host": true, 00:13:33.376 "hosts": [], 00:13:33.376 "serial_number": "SPDK1", 00:13:33.376 "model_number": "SPDK bdev Controller", 00:13:33.376 "max_namespaces": 32, 00:13:33.376 "min_cntlid": 1, 00:13:33.376 "max_cntlid": 65519, 00:13:33.376 "namespaces": [ 00:13:33.376 { 00:13:33.376 "nsid": 1, 00:13:33.376 "bdev_name": "Malloc1", 00:13:33.376 "name": "Malloc1", 00:13:33.376 "nguid": "094F67FCF495437EBDA735CC01B6FFA8", 00:13:33.376 "uuid": "094f67fc-f495-437e-bda7-35cc01b6ffa8" 00:13:33.376 } 00:13:33.376 ] 00:13:33.376 }, 00:13:33.376 { 00:13:33.376 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.377 "subtype": "NVMe", 00:13:33.377 "listen_addresses": [ 00:13:33.377 { 00:13:33.377
"trtype": "VFIOUSER", 00:13:33.377 "adrfam": "IPv4", 00:13:33.377 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.377 "trsvcid": "0" 00:13:33.377 } 00:13:33.377 ], 00:13:33.377 "allow_any_host": true, 00:13:33.377 "hosts": [], 00:13:33.377 "serial_number": "SPDK2", 00:13:33.377 "model_number": "SPDK bdev Controller", 00:13:33.377 "max_namespaces": 32, 00:13:33.377 "min_cntlid": 1, 00:13:33.377 "max_cntlid": 65519, 00:13:33.377 "namespaces": [ 00:13:33.377 { 00:13:33.377 "nsid": 1, 00:13:33.377 "bdev_name": "Malloc2", 00:13:33.377 "name": "Malloc2", 00:13:33.377 "nguid": "BB355CF3EF2C4C74B22AD2DAB585F90D", 00:13:33.377 "uuid": "bb355cf3-ef2c-4c74-b22a-d2dab585f90d" 00:13:33.377 } 00:13:33.377 ] 00:13:33.377 } 00:13:33.377 ] 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=588400 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:33.377 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:33.635 [2024-10-30 12:25:06.163726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.635 Malloc3 00:13:33.893 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:33.893 [2024-10-30 12:25:06.565718] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.151 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:34.151 Asynchronous Event Request test 00:13:34.151 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.151 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.151 Registering asynchronous event callbacks... 00:13:34.151 Starting namespace attribute notice tests for all controllers... 00:13:34.151 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:34.151 aer_cb - Changed Namespace 00:13:34.151 Cleaning up... 
00:13:34.411 [ 00:13:34.411 { 00:13:34.411 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:34.411 "subtype": "Discovery", 00:13:34.411 "listen_addresses": [], 00:13:34.411 "allow_any_host": true, 00:13:34.411 "hosts": [] 00:13:34.411 }, 00:13:34.411 { 00:13:34.411 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:34.411 "subtype": "NVMe", 00:13:34.411 "listen_addresses": [ 00:13:34.411 { 00:13:34.411 "trtype": "VFIOUSER", 00:13:34.411 "adrfam": "IPv4", 00:13:34.411 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:34.411 "trsvcid": "0" 00:13:34.411 } 00:13:34.411 ], 00:13:34.411 "allow_any_host": true, 00:13:34.411 "hosts": [], 00:13:34.411 "serial_number": "SPDK1", 00:13:34.411 "model_number": "SPDK bdev Controller", 00:13:34.411 "max_namespaces": 32, 00:13:34.411 "min_cntlid": 1, 00:13:34.411 "max_cntlid": 65519, 00:13:34.411 "namespaces": [ 00:13:34.411 { 00:13:34.411 "nsid": 1, 00:13:34.411 "bdev_name": "Malloc1", 00:13:34.411 "name": "Malloc1", 00:13:34.411 "nguid": "094F67FCF495437EBDA735CC01B6FFA8", 00:13:34.411 "uuid": "094f67fc-f495-437e-bda7-35cc01b6ffa8" 00:13:34.411 }, 00:13:34.411 { 00:13:34.411 "nsid": 2, 00:13:34.411 "bdev_name": "Malloc3", 00:13:34.411 "name": "Malloc3", 00:13:34.411 "nguid": "AB79771920594C41A285B678BABD84B1", 00:13:34.411 "uuid": "ab797719-2059-4c41-a285-b678babd84b1" 00:13:34.411 } 00:13:34.411 ] 00:13:34.411 }, 00:13:34.411 { 00:13:34.411 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:34.411 "subtype": "NVMe", 00:13:34.411 "listen_addresses": [ 00:13:34.411 { 00:13:34.411 "trtype": "VFIOUSER", 00:13:34.411 "adrfam": "IPv4", 00:13:34.411 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:34.411 "trsvcid": "0" 00:13:34.411 } 00:13:34.411 ], 00:13:34.411 "allow_any_host": true, 00:13:34.411 "hosts": [], 00:13:34.411 "serial_number": "SPDK2", 00:13:34.411 "model_number": "SPDK bdev Controller", 00:13:34.411 "max_namespaces": 32, 00:13:34.411 "min_cntlid": 1, 00:13:34.411 "max_cntlid": 65519, 00:13:34.411 "namespaces": [ 00:13:34.411 { 00:13:34.411 "nsid": 1, 00:13:34.411 "bdev_name": "Malloc2", 00:13:34.411 "name": "Malloc2", 00:13:34.411 "nguid": "BB355CF3EF2C4C74B22AD2DAB585F90D", 00:13:34.411 "uuid": "bb355cf3-ef2c-4c74-b22a-d2dab585f90d" 00:13:34.411 } 00:13:34.411 ] 00:13:34.411 } 00:13:34.411 ] 00:13:34.411 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 588400 00:13:34.411 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:34.411 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:34.411 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:34.411 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:34.411 [2024-10-30 12:25:06.874082] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
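From here the loop repeats for the second device: @81/@82 switch the test traddr and subnqn to vfio-user2/2 and cnode2, and @83 reruns spdk_nvme_identify with -L flags that enable the nvme, nvme_vfio, and vfio_pci debug log components, which is what produces the *DEBUG* register reads and BAR-mapping lines that follow. The invocation, restated as a standalone sketch (SPDK_DIR standing in for the build tree path used in this run):

  #!/usr/bin/env bash
  # Sketch: identify the second vfio-user controller with component-level
  # debug logging enabled (-L), as in the @83 step above.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  "$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2" \
    -g -L nvme -L nvme_vfio -L vfio_pci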
00:13:34.411 [2024-10-30 12:25:06.874124] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588525 ] 00:13:34.411 [2024-10-30 12:25:06.923234] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:34.411 [2024-10-30 12:25:06.928615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:34.411 [2024-10-30 12:25:06.928645] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f232b50f000 00:13:34.411 [2024-10-30 12:25:06.929633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.930620] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.931633] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.932640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.933649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.934657] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.935663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.936670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.411 [2024-10-30 12:25:06.937688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:34.411 [2024-10-30 12:25:06.937714] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f232b504000 00:13:34.411 [2024-10-30 12:25:06.938834] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:34.411 [2024-10-30 12:25:06.953613] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:34.411 [2024-10-30 12:25:06.953651] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:34.411 [2024-10-30 12:25:06.958754] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:34.411 [2024-10-30 12:25:06.958806] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:34.411 [2024-10-30 12:25:06.958897] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:34.411 
[2024-10-30 12:25:06.958923] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:34.411 [2024-10-30 12:25:06.958933] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:34.411 [2024-10-30 12:25:06.959757] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:34.411 [2024-10-30 12:25:06.959778] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:34.411 [2024-10-30 12:25:06.959792] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:34.411 [2024-10-30 12:25:06.960766] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:34.411 [2024-10-30 12:25:06.960787] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:34.411 [2024-10-30 12:25:06.960802] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:34.411 [2024-10-30 12:25:06.961774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:34.411 [2024-10-30 12:25:06.961795] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:34.411 [2024-10-30 12:25:06.962781] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:34.411 [2024-10-30 12:25:06.962802] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:34.411 [2024-10-30 12:25:06.962811] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:34.411 [2024-10-30 12:25:06.962823] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:34.411 [2024-10-30 12:25:06.962933] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:34.411 [2024-10-30 12:25:06.962941] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:34.411 [2024-10-30 12:25:06.962949] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:34.411 [2024-10-30 12:25:06.963784] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:34.411 [2024-10-30 12:25:06.964789] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:34.411 [2024-10-30 12:25:06.965797] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:34.411 [2024-10-30 12:25:06.966789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:34.412 [2024-10-30 12:25:06.966872] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:34.412 [2024-10-30 12:25:06.967809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:34.412 [2024-10-30 12:25:06.967844] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:34.412 [2024-10-30 12:25:06.967854] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.967878] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:34.412 [2024-10-30 12:25:06.967891] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.967912] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.412 [2024-10-30 12:25:06.967922] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.412 [2024-10-30 12:25:06.967928] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.412 [2024-10-30 12:25:06.967946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:06.974273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:06.974296] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:34.412 [2024-10-30 12:25:06.974305] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:34.412 [2024-10-30 12:25:06.974320] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:34.412 [2024-10-30 12:25:06.974328] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:34.412 [2024-10-30 12:25:06.974336] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:34.412 [2024-10-30 12:25:06.974344] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:34.412 [2024-10-30 12:25:06.974352] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.974365] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:34.412 [2024-10-30 
12:25:06.974380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:06.982270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:06.982300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.412 [2024-10-30 12:25:06.982314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.412 [2024-10-30 12:25:06.982330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.412 [2024-10-30 12:25:06.982343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.412 [2024-10-30 12:25:06.982352] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.982364] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.982377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:06.990268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:06.990291] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:34.412 [2024-10-30 12:25:06.990302] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.990322] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.990332] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.990346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:06.998280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:06.998357] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.998375] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:06.998388] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:34.412 [2024-10-30 12:25:06.998397] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:13:34.412 [2024-10-30 12:25:06.998403] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.412 [2024-10-30 12:25:06.998412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.006267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.006297] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:34.412 [2024-10-30 12:25:07.006314] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.006328] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.006341] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.412 [2024-10-30 12:25:07.006350] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.412 [2024-10-30 12:25:07.006356] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.412 [2024-10-30 12:25:07.006370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.014267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.014297] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.014314] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.014327] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.412 [2024-10-30 12:25:07.014335] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.412 [2024-10-30 12:25:07.014342] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.412 [2024-10-30 12:25:07.014351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.022269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.022291] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022304] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022318] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022329] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022337] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022346] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022354] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:34.412 [2024-10-30 12:25:07.022362] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:34.412 [2024-10-30 12:25:07.022370] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:34.412 [2024-10-30 12:25:07.022394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.030266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.030293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.038265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.038300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.046267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.046293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:34.412 [2024-10-30 12:25:07.054269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:34.412 [2024-10-30 12:25:07.054300] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:34.412 [2024-10-30 12:25:07.054311] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:34.412 [2024-10-30 12:25:07.054317] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:34.412 [2024-10-30 12:25:07.054323] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:34.412 [2024-10-30 12:25:07.054329] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:34.412 [2024-10-30 12:25:07.054338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:34.412 [2024-10-30 12:25:07.054350] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:34.412 
[2024-10-30 12:25:07.054358] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:34.412 [2024-10-30 12:25:07.054364] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.413 [2024-10-30 12:25:07.054373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:34.413 [2024-10-30 12:25:07.054383] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:34.413 [2024-10-30 12:25:07.054391] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.413 [2024-10-30 12:25:07.054397] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.413 [2024-10-30 12:25:07.054406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.413 [2024-10-30 12:25:07.054422] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:34.413 [2024-10-30 12:25:07.054431] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:34.413 [2024-10-30 12:25:07.054437] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.413 [2024-10-30 12:25:07.054446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:34.413 [2024-10-30 12:25:07.062267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:34.413 [2024-10-30 12:25:07.062294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:34.413 [2024-10-30 12:25:07.062312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:34.413 [2024-10-30 12:25:07.062324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:34.413 ===================================================== 00:13:34.413 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:34.413 ===================================================== 00:13:34.413 Controller Capabilities/Features 00:13:34.413 ================================ 00:13:34.413 Vendor ID: 4e58 00:13:34.413 Subsystem Vendor ID: 4e58 00:13:34.413 Serial Number: SPDK2 00:13:34.413 Model Number: SPDK bdev Controller 00:13:34.413 Firmware Version: 25.01 00:13:34.413 Recommended Arb Burst: 6 00:13:34.413 IEEE OUI Identifier: 8d 6b 50 00:13:34.413 Multi-path I/O 00:13:34.413 May have multiple subsystem ports: Yes 00:13:34.413 May have multiple controllers: Yes 00:13:34.413 Associated with SR-IOV VF: No 00:13:34.413 Max Data Transfer Size: 131072 00:13:34.413 Max Number of Namespaces: 32 00:13:34.413 Max Number of I/O Queues: 127 00:13:34.413 NVMe Specification Version (VS): 1.3 00:13:34.413 NVMe Specification Version (Identify): 1.3 00:13:34.413 Maximum Queue Entries: 256 00:13:34.413 Contiguous Queues Required: Yes 00:13:34.413 Arbitration Mechanisms Supported 00:13:34.413 Weighted Round Robin: Not Supported 00:13:34.413 Vendor Specific: Not 
Supported 00:13:34.413 Reset Timeout: 15000 ms 00:13:34.413 Doorbell Stride: 4 bytes 00:13:34.413 NVM Subsystem Reset: Not Supported 00:13:34.413 Command Sets Supported 00:13:34.413 NVM Command Set: Supported 00:13:34.413 Boot Partition: Not Supported 00:13:34.413 Memory Page Size Minimum: 4096 bytes 00:13:34.413 Memory Page Size Maximum: 4096 bytes 00:13:34.413 Persistent Memory Region: Not Supported 00:13:34.413 Optional Asynchronous Events Supported 00:13:34.413 Namespace Attribute Notices: Supported 00:13:34.413 Firmware Activation Notices: Not Supported 00:13:34.413 ANA Change Notices: Not Supported 00:13:34.413 PLE Aggregate Log Change Notices: Not Supported 00:13:34.413 LBA Status Info Alert Notices: Not Supported 00:13:34.413 EGE Aggregate Log Change Notices: Not Supported 00:13:34.413 Normal NVM Subsystem Shutdown event: Not Supported 00:13:34.413 Zone Descriptor Change Notices: Not Supported 00:13:34.413 Discovery Log Change Notices: Not Supported 00:13:34.413 Controller Attributes 00:13:34.413 128-bit Host Identifier: Supported 00:13:34.413 Non-Operational Permissive Mode: Not Supported 00:13:34.413 NVM Sets: Not Supported 00:13:34.413 Read Recovery Levels: Not Supported 00:13:34.413 Endurance Groups: Not Supported 00:13:34.413 Predictable Latency Mode: Not Supported 00:13:34.413 Traffic Based Keep ALive: Not Supported 00:13:34.413 Namespace Granularity: Not Supported 00:13:34.413 SQ Associations: Not Supported 00:13:34.413 UUID List: Not Supported 00:13:34.413 Multi-Domain Subsystem: Not Supported 00:13:34.413 Fixed Capacity Management: Not Supported 00:13:34.413 Variable Capacity Management: Not Supported 00:13:34.413 Delete Endurance Group: Not Supported 00:13:34.413 Delete NVM Set: Not Supported 00:13:34.413 Extended LBA Formats Supported: Not Supported 00:13:34.413 Flexible Data Placement Supported: Not Supported 00:13:34.413 00:13:34.413 Controller Memory Buffer Support 00:13:34.413 ================================ 00:13:34.413 Supported: No 00:13:34.413 00:13:34.413 Persistent Memory Region Support 00:13:34.413 ================================ 00:13:34.413 Supported: No 00:13:34.413 00:13:34.413 Admin Command Set Attributes 00:13:34.413 ============================ 00:13:34.413 Security Send/Receive: Not Supported 00:13:34.413 Format NVM: Not Supported 00:13:34.413 Firmware Activate/Download: Not Supported 00:13:34.413 Namespace Management: Not Supported 00:13:34.413 Device Self-Test: Not Supported 00:13:34.413 Directives: Not Supported 00:13:34.413 NVMe-MI: Not Supported 00:13:34.413 Virtualization Management: Not Supported 00:13:34.413 Doorbell Buffer Config: Not Supported 00:13:34.413 Get LBA Status Capability: Not Supported 00:13:34.413 Command & Feature Lockdown Capability: Not Supported 00:13:34.413 Abort Command Limit: 4 00:13:34.413 Async Event Request Limit: 4 00:13:34.413 Number of Firmware Slots: N/A 00:13:34.413 Firmware Slot 1 Read-Only: N/A 00:13:34.413 Firmware Activation Without Reset: N/A 00:13:34.413 Multiple Update Detection Support: N/A 00:13:34.413 Firmware Update Granularity: No Information Provided 00:13:34.413 Per-Namespace SMART Log: No 00:13:34.413 Asymmetric Namespace Access Log Page: Not Supported 00:13:34.413 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:34.413 Command Effects Log Page: Supported 00:13:34.413 Get Log Page Extended Data: Supported 00:13:34.413 Telemetry Log Pages: Not Supported 00:13:34.413 Persistent Event Log Pages: Not Supported 00:13:34.413 Supported Log Pages Log Page: May Support 00:13:34.413 Commands Supported & 
Effects Log Page: Not Supported 00:13:34.413 Feature Identifiers & Effects Log Page:May Support 00:13:34.413 NVMe-MI Commands & Effects Log Page: May Support 00:13:34.413 Data Area 4 for Telemetry Log: Not Supported 00:13:34.413 Error Log Page Entries Supported: 128 00:13:34.413 Keep Alive: Supported 00:13:34.413 Keep Alive Granularity: 10000 ms 00:13:34.413 00:13:34.413 NVM Command Set Attributes 00:13:34.413 ========================== 00:13:34.413 Submission Queue Entry Size 00:13:34.413 Max: 64 00:13:34.413 Min: 64 00:13:34.413 Completion Queue Entry Size 00:13:34.413 Max: 16 00:13:34.413 Min: 16 00:13:34.413 Number of Namespaces: 32 00:13:34.413 Compare Command: Supported 00:13:34.413 Write Uncorrectable Command: Not Supported 00:13:34.413 Dataset Management Command: Supported 00:13:34.413 Write Zeroes Command: Supported 00:13:34.413 Set Features Save Field: Not Supported 00:13:34.413 Reservations: Not Supported 00:13:34.413 Timestamp: Not Supported 00:13:34.413 Copy: Supported 00:13:34.413 Volatile Write Cache: Present 00:13:34.413 Atomic Write Unit (Normal): 1 00:13:34.413 Atomic Write Unit (PFail): 1 00:13:34.413 Atomic Compare & Write Unit: 1 00:13:34.413 Fused Compare & Write: Supported 00:13:34.413 Scatter-Gather List 00:13:34.413 SGL Command Set: Supported (Dword aligned) 00:13:34.413 SGL Keyed: Not Supported 00:13:34.413 SGL Bit Bucket Descriptor: Not Supported 00:13:34.413 SGL Metadata Pointer: Not Supported 00:13:34.413 Oversized SGL: Not Supported 00:13:34.413 SGL Metadata Address: Not Supported 00:13:34.413 SGL Offset: Not Supported 00:13:34.413 Transport SGL Data Block: Not Supported 00:13:34.413 Replay Protected Memory Block: Not Supported 00:13:34.413 00:13:34.413 Firmware Slot Information 00:13:34.413 ========================= 00:13:34.413 Active slot: 1 00:13:34.413 Slot 1 Firmware Revision: 25.01 00:13:34.413 00:13:34.413 00:13:34.413 Commands Supported and Effects 00:13:34.413 ============================== 00:13:34.413 Admin Commands 00:13:34.413 -------------- 00:13:34.413 Get Log Page (02h): Supported 00:13:34.413 Identify (06h): Supported 00:13:34.413 Abort (08h): Supported 00:13:34.413 Set Features (09h): Supported 00:13:34.413 Get Features (0Ah): Supported 00:13:34.413 Asynchronous Event Request (0Ch): Supported 00:13:34.413 Keep Alive (18h): Supported 00:13:34.413 I/O Commands 00:13:34.413 ------------ 00:13:34.413 Flush (00h): Supported LBA-Change 00:13:34.413 Write (01h): Supported LBA-Change 00:13:34.413 Read (02h): Supported 00:13:34.413 Compare (05h): Supported 00:13:34.413 Write Zeroes (08h): Supported LBA-Change 00:13:34.413 Dataset Management (09h): Supported LBA-Change 00:13:34.413 Copy (19h): Supported LBA-Change 00:13:34.413 00:13:34.413 Error Log 00:13:34.413 ========= 00:13:34.413 00:13:34.413 Arbitration 00:13:34.413 =========== 00:13:34.413 Arbitration Burst: 1 00:13:34.413 00:13:34.413 Power Management 00:13:34.413 ================ 00:13:34.413 Number of Power States: 1 00:13:34.413 Current Power State: Power State #0 00:13:34.413 Power State #0: 00:13:34.413 Max Power: 0.00 W 00:13:34.413 Non-Operational State: Operational 00:13:34.413 Entry Latency: Not Reported 00:13:34.413 Exit Latency: Not Reported 00:13:34.413 Relative Read Throughput: 0 00:13:34.414 Relative Read Latency: 0 00:13:34.414 Relative Write Throughput: 0 00:13:34.414 Relative Write Latency: 0 00:13:34.414 Idle Power: Not Reported 00:13:34.414 Active Power: Not Reported 00:13:34.414 Non-Operational Permissive Mode: Not Supported 00:13:34.414 00:13:34.414 Health Information 
00:13:34.414 ================== 00:13:34.414 Critical Warnings: 00:13:34.414 Available Spare Space: OK 00:13:34.414 Temperature: OK 00:13:34.414 Device Reliability: OK 00:13:34.414 Read Only: No 00:13:34.414 Volatile Memory Backup: OK 00:13:34.414 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:34.414 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:34.414 Available Spare: 0% 00:13:34.414 Available Spare Threshold: 0% [2024-10-30 12:25:07.062447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:34.414 [2024-10-30 12:25:07.069411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:34.414 [2024-10-30 12:25:07.069467] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:34.414 [2024-10-30 12:25:07.069486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.414 [2024-10-30 12:25:07.069496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.414 [2024-10-30 12:25:07.069506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.414 [2024-10-30 12:25:07.069520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.414 [2024-10-30 12:25:07.069590] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:34.414 [2024-10-30 12:25:07.069612] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:34.414 [2024-10-30 12:25:07.070608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:34.414 [2024-10-30 12:25:07.070693] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:34.414 [2024-10-30 12:25:07.070708] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:34.414 [2024-10-30 12:25:07.071615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:34.414 [2024-10-30 12:25:07.071639] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:34.414 [2024-10-30 12:25:07.071690] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:34.414 [2024-10-30 12:25:07.074267] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:34.672 Life Percentage Used: 0% 00:13:34.672 Data Units Read: 0 00:13:34.672 Data Units Written: 0 00:13:34.672 Host Read Commands: 0 00:13:34.672 Host Write Commands: 0 00:13:34.672 Controller Busy Time: 0 minutes 00:13:34.672 Power Cycles: 0 00:13:34.672 Power On Hours: 0 hours 00:13:34.672 Unsafe Shutdowns: 0 00:13:34.672 Unrecoverable Media Errors: 0 00:13:34.672 Lifetime Error Log Entries: 0 00:13:34.672 Warning Temperature 
Time: 0 minutes 00:13:34.672 Critical Temperature Time: 0 minutes 00:13:34.672 00:13:34.672 Number of Queues 00:13:34.672 ================ 00:13:34.672 Number of I/O Submission Queues: 127 00:13:34.672 Number of I/O Completion Queues: 127 00:13:34.672 00:13:34.672 Active Namespaces 00:13:34.672 ================= 00:13:34.672 Namespace ID:1 00:13:34.672 Error Recovery Timeout: Unlimited 00:13:34.672 Command Set Identifier: NVM (00h) 00:13:34.672 Deallocate: Supported 00:13:34.672 Deallocated/Unwritten Error: Not Supported 00:13:34.672 Deallocated Read Value: Unknown 00:13:34.672 Deallocate in Write Zeroes: Not Supported 00:13:34.672 Deallocated Guard Field: 0xFFFF 00:13:34.672 Flush: Supported 00:13:34.672 Reservation: Supported 00:13:34.672 Namespace Sharing Capabilities: Multiple Controllers 00:13:34.672 Size (in LBAs): 131072 (0GiB) 00:13:34.672 Capacity (in LBAs): 131072 (0GiB) 00:13:34.672 Utilization (in LBAs): 131072 (0GiB) 00:13:34.672 NGUID: BB355CF3EF2C4C74B22AD2DAB585F90D 00:13:34.672 UUID: bb355cf3-ef2c-4c74-b22a-d2dab585f90d 00:13:34.672 Thin Provisioning: Not Supported 00:13:34.672 Per-NS Atomic Units: Yes 00:13:34.672 Atomic Boundary Size (Normal): 0 00:13:34.672 Atomic Boundary Size (PFail): 0 00:13:34.672 Atomic Boundary Offset: 0 00:13:34.672 Maximum Single Source Range Length: 65535 00:13:34.672 Maximum Copy Length: 65535 00:13:34.672 Maximum Source Range Count: 1 00:13:34.672 NGUID/EUI64 Never Reused: No 00:13:34.672 Namespace Write Protected: No 00:13:34.672 Number of LBA Formats: 1 00:13:34.672 Current LBA Format: LBA Format #00 00:13:34.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.672 00:13:34.672 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:34.672 [2024-10-30 12:25:07.322059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.930 Initializing NVMe Controllers 00:13:39.930 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:39.930 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:39.930 Initialization complete. Launching workers. 
00:13:39.930 ======================================================== 00:13:39.930 Latency(us) 00:13:39.930 Device Information : IOPS MiB/s Average min max 00:13:39.930 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33819.34 132.11 3783.95 1173.26 7415.79 00:13:39.930 ======================================================== 00:13:39.930 Total : 33819.34 132.11 3783.95 1173.26 7415.79 00:13:39.930 00:13:39.930 [2024-10-30 12:25:12.425626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.930 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:40.188 [2024-10-30 12:25:12.684350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.455 Initializing NVMe Controllers 00:13:45.455 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.455 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:45.455 Initialization complete. Launching workers. 00:13:45.455 ======================================================== 00:13:45.455 Latency(us) 00:13:45.455 Device Information : IOPS MiB/s Average min max 00:13:45.455 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32448.57 126.75 3944.34 1181.38 7580.41 00:13:45.455 ======================================================== 00:13:45.455 Total : 32448.57 126.75 3944.34 1181.38 7580.41 00:13:45.455 00:13:45.455 [2024-10-30 12:25:17.706866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.455 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:45.455 [2024-10-30 12:25:17.939013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.786 [2024-10-30 12:25:23.076398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.786 Initializing NVMe Controllers 00:13:50.786 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.786 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.786 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:50.786 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:50.786 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:50.786 Initialization complete. Launching workers. 
00:13:50.786 Starting thread on core 2 00:13:50.786 Starting thread on core 3 00:13:50.786 Starting thread on core 1 00:13:50.786 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:50.786 [2024-10-30 12:25:23.387799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.171 [2024-10-30 12:25:26.573553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.171 Initializing NVMe Controllers 00:13:54.171 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.171 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.171 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:54.171 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:54.171 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:54.171 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:54.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:54.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:54.171 Initialization complete. Launching workers. 00:13:54.171 Starting thread on core 1 with urgent priority queue 00:13:54.171 Starting thread on core 2 with urgent priority queue 00:13:54.171 Starting thread on core 3 with urgent priority queue 00:13:54.171 Starting thread on core 0 with urgent priority queue 00:13:54.171 SPDK bdev Controller (SPDK2 ) core 0: 3389.00 IO/s 29.51 secs/100000 ios 00:13:54.171 SPDK bdev Controller (SPDK2 ) core 1: 3729.67 IO/s 26.81 secs/100000 ios 00:13:54.171 SPDK bdev Controller (SPDK2 ) core 2: 3726.33 IO/s 26.84 secs/100000 ios 00:13:54.171 SPDK bdev Controller (SPDK2 ) core 3: 3381.33 IO/s 29.57 secs/100000 ios 00:13:54.171 ======================================================== 00:13:54.171 00:13:54.171 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.429 [2024-10-30 12:25:26.892772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.429 Initializing NVMe Controllers 00:13:54.429 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.429 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.429 Namespace ID: 1 size: 0GB 00:13:54.429 Initialization complete. 00:13:54.429 INFO: using host memory buffer for IO 00:13:54.429 Hello world! 
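Note on the runs above: spdk_nvme_perf, reconnect, arbitration, and hello_world all reach the emulated controller through the same SPDK transport ID string rather than a PCI address. A minimal sketch of that invocation pattern, assuming the target started earlier in this log is still serving /var/run/vfio-user/domain/vfio-user2/2 (every flag value below is copied from the commands traced above; -q/-o/-w/-t/-c are workload tunables, only trtype/traddr/subnqn address the controller):
# Hedged sketch: drive a vfio-user controller with spdk_nvme_perf,
# run from the SPDK build tree as in this job.
./build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
The reconnect and arbitration examples accept the same -r string, so any tool built on the SPDK NVMe driver can attach to the controller the same way.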
00:13:54.429 [2024-10-30 12:25:26.904990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.429 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.686 [2024-10-30 12:25:27.218578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.650 Initializing NVMe Controllers 00:13:55.650 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.650 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.650 Initialization complete. Launching workers. 00:13:55.650 submit (in ns) avg, min, max = 7541.7, 3501.1, 4030002.2 00:13:55.650 complete (in ns) avg, min, max = 22797.9, 2055.6, 4028122.2 00:13:55.650 00:13:55.650 Submit histogram 00:13:55.650 ================ 00:13:55.650 Range in us Cumulative Count 00:13:55.650 3.484 - 3.508: 0.0151% ( 2) 00:13:55.650 3.508 - 3.532: 0.3851% ( 49) 00:13:55.650 3.532 - 3.556: 2.3707% ( 263) 00:13:55.650 3.556 - 3.579: 5.0887% ( 360) 00:13:55.650 3.579 - 3.603: 11.7403% ( 881) 00:13:55.650 3.603 - 3.627: 20.6871% ( 1185) 00:13:55.650 3.627 - 3.650: 31.6950% ( 1458) 00:13:55.650 3.650 - 3.674: 39.6602% ( 1055) 00:13:55.650 3.674 - 3.698: 46.9158% ( 961) 00:13:55.650 3.698 - 3.721: 52.8275% ( 783) 00:13:55.650 3.721 - 3.745: 56.8818% ( 537) 00:13:55.650 3.745 - 3.769: 61.1099% ( 560) 00:13:55.650 3.769 - 3.793: 64.5225% ( 452) 00:13:55.650 3.793 - 3.816: 68.0710% ( 470) 00:13:55.650 3.816 - 3.840: 71.3628% ( 436) 00:13:55.650 3.840 - 3.864: 75.7871% ( 586) 00:13:55.650 3.864 - 3.887: 79.8112% ( 533) 00:13:55.650 3.887 - 3.911: 83.4202% ( 478) 00:13:55.650 3.911 - 3.935: 86.0098% ( 343) 00:13:55.650 3.935 - 3.959: 87.7841% ( 235) 00:13:55.650 3.959 - 3.982: 89.3394% ( 206) 00:13:55.650 3.982 - 4.006: 90.9626% ( 215) 00:13:55.650 4.006 - 4.030: 92.2612% ( 172) 00:13:55.650 4.030 - 4.053: 93.1899% ( 123) 00:13:55.650 4.053 - 4.077: 94.0959% ( 120) 00:13:55.650 4.077 - 4.101: 94.9792% ( 117) 00:13:55.650 4.101 - 4.124: 95.5077% ( 70) 00:13:55.650 4.124 - 4.148: 95.9758% ( 62) 00:13:55.650 4.148 - 4.172: 96.3005% ( 43) 00:13:55.650 4.172 - 4.196: 96.5119% ( 28) 00:13:55.650 4.196 - 4.219: 96.6855% ( 23) 00:13:55.650 4.219 - 4.243: 96.7837% ( 13) 00:13:55.650 4.243 - 4.267: 96.9271% ( 19) 00:13:55.650 4.267 - 4.290: 97.1083% ( 24) 00:13:55.650 4.290 - 4.314: 97.1838% ( 10) 00:13:55.650 4.314 - 4.338: 97.2442% ( 8) 00:13:55.650 4.338 - 4.361: 97.2971% ( 7) 00:13:55.650 4.361 - 4.385: 97.3499% ( 7) 00:13:55.650 4.385 - 4.409: 97.4103% ( 8) 00:13:55.650 4.409 - 4.433: 97.4632% ( 7) 00:13:55.650 4.433 - 4.456: 97.4783% ( 2) 00:13:55.650 4.456 - 4.480: 97.4858% ( 1) 00:13:55.650 4.480 - 4.504: 97.5085% ( 3) 00:13:55.650 4.527 - 4.551: 97.5160% ( 1) 00:13:55.650 4.551 - 4.575: 97.5236% ( 1) 00:13:55.650 4.693 - 4.717: 97.5311% ( 1) 00:13:55.650 4.717 - 4.741: 97.5387% ( 1) 00:13:55.650 4.788 - 4.812: 97.5613% ( 3) 00:13:55.650 4.812 - 4.836: 97.6066% ( 6) 00:13:55.650 4.836 - 4.859: 97.6444% ( 5) 00:13:55.650 4.859 - 4.883: 97.6746% ( 4) 00:13:55.650 4.883 - 4.907: 97.7274% ( 7) 00:13:55.650 4.907 - 4.930: 97.8105% ( 11) 00:13:55.650 4.930 - 4.954: 97.8709% ( 8) 00:13:55.650 4.954 - 4.978: 97.9086% ( 5) 00:13:55.650 4.978 - 5.001: 97.9992% ( 12) 00:13:55.650 5.001 - 5.025: 98.0521% ( 7) 00:13:55.650 5.025 - 
5.049: 98.1049% ( 7) 00:13:55.650 5.049 - 5.073: 98.1804% ( 10) 00:13:55.650 5.073 - 5.096: 98.2106% ( 4) 00:13:55.650 5.096 - 5.120: 98.2559% ( 6) 00:13:55.650 5.120 - 5.144: 98.2786% ( 3) 00:13:55.650 5.144 - 5.167: 98.3088% ( 4) 00:13:55.650 5.167 - 5.191: 98.3314% ( 3) 00:13:55.650 5.191 - 5.215: 98.3767% ( 6) 00:13:55.650 5.215 - 5.239: 98.4220% ( 6) 00:13:55.650 5.239 - 5.262: 98.4371% ( 2) 00:13:55.650 5.262 - 5.286: 98.4522% ( 2) 00:13:55.650 5.310 - 5.333: 98.4673% ( 2) 00:13:55.650 5.333 - 5.357: 98.4749% ( 1) 00:13:55.650 5.357 - 5.381: 98.4900% ( 2) 00:13:55.650 5.381 - 5.404: 98.4975% ( 1) 00:13:55.650 5.404 - 5.428: 98.5353% ( 5) 00:13:55.650 5.428 - 5.452: 98.5579% ( 3) 00:13:55.650 5.476 - 5.499: 98.5730% ( 2) 00:13:55.650 5.499 - 5.523: 98.5806% ( 1) 00:13:55.650 5.523 - 5.547: 98.5957% ( 2) 00:13:55.650 5.570 - 5.594: 98.6032% ( 1) 00:13:55.650 5.594 - 5.618: 98.6108% ( 1) 00:13:55.650 5.689 - 5.713: 98.6183% ( 1) 00:13:55.650 5.879 - 5.902: 98.6259% ( 1) 00:13:55.650 5.902 - 5.926: 98.6334% ( 1) 00:13:55.650 5.997 - 6.021: 98.6410% ( 1) 00:13:55.650 6.068 - 6.116: 98.6485% ( 1) 00:13:55.650 6.116 - 6.163: 98.6561% ( 1) 00:13:55.650 6.210 - 6.258: 98.6636% ( 1) 00:13:55.650 6.542 - 6.590: 98.6712% ( 1) 00:13:55.650 6.779 - 6.827: 98.6787% ( 1) 00:13:55.650 6.827 - 6.874: 98.6863% ( 1) 00:13:55.650 6.921 - 6.969: 98.6938% ( 1) 00:13:55.650 6.969 - 7.016: 98.7014% ( 1) 00:13:55.650 7.301 - 7.348: 98.7089% ( 1) 00:13:55.650 7.633 - 7.680: 98.7165% ( 1) 00:13:55.650 7.680 - 7.727: 98.7240% ( 1) 00:13:55.650 7.870 - 7.917: 98.7391% ( 2) 00:13:55.650 7.964 - 8.012: 98.7542% ( 2) 00:13:55.650 8.154 - 8.201: 98.7618% ( 1) 00:13:55.650 8.249 - 8.296: 98.7693% ( 1) 00:13:55.650 8.296 - 8.344: 98.7769% ( 1) 00:13:55.650 8.486 - 8.533: 98.7844% ( 1) 00:13:55.650 8.581 - 8.628: 98.7995% ( 2) 00:13:55.650 8.628 - 8.676: 98.8146% ( 2) 00:13:55.650 8.676 - 8.723: 98.8222% ( 1) 00:13:55.650 8.818 - 8.865: 98.8297% ( 1) 00:13:55.650 8.865 - 8.913: 98.8448% ( 2) 00:13:55.650 8.960 - 9.007: 98.8524% ( 1) 00:13:55.650 9.055 - 9.102: 98.8599% ( 1) 00:13:55.650 9.197 - 9.244: 98.8675% ( 1) 00:13:55.650 9.339 - 9.387: 98.8826% ( 2) 00:13:55.650 9.434 - 9.481: 98.8901% ( 1) 00:13:55.650 9.481 - 9.529: 98.8977% ( 1) 00:13:55.650 9.529 - 9.576: 98.9052% ( 1) 00:13:55.650 9.766 - 9.813: 98.9203% ( 2) 00:13:55.650 9.813 - 9.861: 98.9279% ( 1) 00:13:55.650 10.050 - 10.098: 98.9354% ( 1) 00:13:55.650 10.524 - 10.572: 98.9430% ( 1) 00:13:55.650 10.856 - 10.904: 98.9505% ( 1) 00:13:55.650 11.046 - 11.093: 98.9581% ( 1) 00:13:55.650 11.662 - 11.710: 98.9656% ( 1) 00:13:55.650 11.710 - 11.757: 98.9732% ( 1) 00:13:55.650 12.041 - 12.089: 98.9807% ( 1) 00:13:55.650 12.089 - 12.136: 98.9883% ( 1) 00:13:55.650 12.136 - 12.231: 98.9958% ( 1) 00:13:55.650 12.421 - 12.516: 99.0109% ( 2) 00:13:55.650 13.179 - 13.274: 99.0260% ( 2) 00:13:55.650 13.464 - 13.559: 99.0336% ( 1) 00:13:55.650 14.507 - 14.601: 99.0411% ( 1) 00:13:55.650 14.981 - 15.076: 99.0487% ( 1) 00:13:55.650 15.644 - 15.739: 99.0562% ( 1) 00:13:55.650 16.972 - 17.067: 99.0638% ( 1) 00:13:55.650 17.067 - 17.161: 99.0713% ( 1) 00:13:55.650 17.256 - 17.351: 99.0789% ( 1) 00:13:55.650 17.351 - 17.446: 99.1091% ( 4) 00:13:55.650 17.446 - 17.541: 99.1468% ( 5) 00:13:55.650 17.541 - 17.636: 99.1846% ( 5) 00:13:55.650 17.636 - 17.730: 99.2450% ( 8) 00:13:55.650 17.730 - 17.825: 99.3054% ( 8) 00:13:55.650 17.825 - 17.920: 99.3507% ( 6) 00:13:55.650 17.920 - 18.015: 99.3960% ( 6) 00:13:55.650 18.015 - 18.110: 99.4866% ( 12) 00:13:55.650 18.110 - 18.204: 
99.5696% ( 11) 00:13:55.650 18.204 - 18.299: 99.6149% ( 6) 00:13:55.650 18.299 - 18.394: 99.6527% ( 5) 00:13:55.650 18.394 - 18.489: 99.6980% ( 6) 00:13:55.650 18.489 - 18.584: 99.7055% ( 1) 00:13:55.650 18.584 - 18.679: 99.7735% ( 9) 00:13:55.650 18.679 - 18.773: 99.7886% ( 2) 00:13:55.650 18.773 - 18.868: 99.8188% ( 4) 00:13:55.650 18.868 - 18.963: 99.8414% ( 3) 00:13:55.650 18.963 - 19.058: 99.8565% ( 2) 00:13:55.650 19.247 - 19.342: 99.8641% ( 1) 00:13:55.650 19.342 - 19.437: 99.8792% ( 2) 00:13:55.650 19.721 - 19.816: 99.8867% ( 1) 00:13:55.650 23.040 - 23.135: 99.8943% ( 1) 00:13:55.650 23.230 - 23.324: 99.9018% ( 1) 00:13:55.650 29.203 - 29.393: 99.9094% ( 1) 00:13:55.650 3980.705 - 4004.978: 99.9698% ( 8) 00:13:55.650 4004.978 - 4029.250: 99.9924% ( 3) 00:13:55.650 4029.250 - 4053.523: 100.0000% ( 1) 00:13:55.650 00:13:55.650 Complete histogram 00:13:55.650 ================== 00:13:55.650 Range in us Cumulative Count 00:13:55.650 2.050 - 2.062: 0.7173% ( 95) 00:13:55.650 2.062 - 2.074: 23.9487% ( 3077) 00:13:55.650 2.074 - 2.086: 31.6044% ( 1014) 00:13:55.650 2.086 - 2.098: 36.8365% ( 693) 00:13:55.651 2.098 - 2.110: 55.1529% ( 2426) 00:13:55.651 2.110 - 2.121: 58.8146% ( 485) 00:13:55.651 2.121 - 2.133: 62.9294% ( 545) 00:13:55.651 2.133 - 2.145: 70.8947% ( 1055) 00:13:55.651 2.145 - 2.157: 72.3292% ( 190) 00:13:55.651 2.157 - 2.169: 75.8777% ( 470) 00:13:55.651 2.169 - 2.181: 80.4757% ( 609) 00:13:55.651 2.181 - 2.193: 81.4798% ( 133) 00:13:55.651 2.193 - 2.204: 82.7935% ( 174) 00:13:55.651 2.204 - 2.216: 86.1080% ( 439) 00:13:55.651 2.216 - 2.228: 88.4636% ( 312) 00:13:55.651 2.228 - 2.240: 90.0642% ( 212) 00:13:55.651 2.240 - 2.252: 92.4575% ( 317) 00:13:55.651 2.252 - 2.264: 93.1899% ( 97) 00:13:55.651 2.264 - 2.276: 93.5372% ( 46) 00:13:55.651 2.276 - 2.287: 93.9449% ( 54) 00:13:55.651 2.287 - 2.299: 94.7225% ( 103) 00:13:55.651 2.299 - 2.311: 95.1151% ( 52) 00:13:55.651 2.311 - 2.323: 95.2586% ( 19) 00:13:55.651 2.323 - 2.335: 95.3869% ( 17) 00:13:55.651 2.335 - 2.347: 95.4851% ( 13) 00:13:55.651 2.347 - 2.359: 95.5379% ( 7) 00:13:55.651 2.359 - 2.370: 95.7418% ( 27) 00:13:55.651 2.370 - 2.382: 96.0815% ( 45) 00:13:55.651 2.382 - 2.394: 96.3760% ( 39) 00:13:55.651 2.394 - 2.406: 96.6478% ( 36) 00:13:55.651 2.406 - 2.418: 96.8667% ( 29) 00:13:55.651 2.418 - 2.430: 97.0102% ( 19) 00:13:55.651 2.430 - 2.441: 97.2367% ( 30) 00:13:55.651 2.441 - 2.453: 97.4028% ( 22) 00:13:55.651 2.453 - 2.465: 97.5462% ( 19) 00:13:55.651 2.465 - 2.477: 97.7048% ( 21) 00:13:55.651 2.477 - 2.489: 97.8558% ( 20) 00:13:55.651 2.489 - 2.501: 97.9464% ( 12) 00:13:55.651 2.501 - 2.513: 98.0219% ( 10) 00:13:55.651 2.513 - 2.524: 98.0672% ( 6) 00:13:55.651 2.524 - 2.536: 98.0898% ( 3) 00:13:55.651 2.536 - 2.548: 98.0974% ( 1) 00:13:55.651 2.548 - 2.560: 98.1200% ( 3) 00:13:55.651 2.560 - 2.572: 98.1578% ( 5) 00:13:55.651 2.584 - 2.596: 98.1729% ( 2) 00:13:55.651 2.596 - 2.607: 98.1804% ( 1) 00:13:55.651 2.619 - 2.631: 98.1955% ( 2) 00:13:55.651 2.631 - 2.643: 98.2106% ( 2) 00:13:55.651 2.667 - 2.679: 98.2257% ( 2) 00:13:55.651 2.679 - 2.690: 98.2408% ( 2) 00:13:55.651 2.690 - 2.702: 98.2484% ( 1) 00:13:55.651 2.702 - 2.714: 98.2559% ( 1) 00:13:55.651 2.714 - 2.726: 98.2635% ( 1) 00:13:55.651 2.761 - 2.773: 98.2710% ( 1) 00:13:55.651 2.797 - 2.809: 98.2786% ( 1) 00:13:55.651 2.833 - 2.844: 98.2861% ( 1) 00:13:55.651 2.844 - 2.856: 98.3088% ( 3) 00:13:55.651 2.856 - 2.868: 98.3163% ( 1) 00:13:55.651 2.880 - 2.892: 98.3239% ( 1) 00:13:55.651 2.951 - 2.963: 98.3314% ( 1) 00:13:55.651 2.963 - 2.975: 98.3390% 
( 1) 00:13:55.651 2.975 - 2.987: 98.3465% ( 1) 00:13:55.651 2.987 - 2.999: 98.3541% ( 1) 00:13:55.651 2.999 - 3.010: 98.3616% ( 1) 00:13:55.651 3.010 - 3.022: 98.3692% ( 1) 00:13:55.651 3.022 - 3.034: 98.3843% ( 2) 00:13:55.651 3.034 - 3.058: 98.3994% ( 2) 00:13:55.651 3.058 - 3.081: 98.4296% ( 4) 00:13:55.651 3.081 - 3.105: 98.4447% ( 2) 00:13:55.651 3.129 - 3.153: 98.4522% ( 1) 00:13:55.651 3.176 - 3.200: 98.4749% ( 2) 00:13:55.651 3.224 - 3.247: 98.4900% ( 2) 00:13:55.651 3.271 - 3.295: 98.5051% ( 2) 00:13:55.651 3.295 - 3.319: 98.5126% ( 1) 00:13:55.651 3.319 - 3.342: 98.5202% ( 1) 00:13:55.651 3.366 - 3.390: 98.5277% ( 1) 00:13:55.651 3.390 - 3.413: 98.5353% ( 1) 00:13:55.651 3.484 - 3.508: 98.5428% ( 1) 00:13:55.651 3.508 - 3.532: 98.5579% ( 2) 00:13:55.651 3.556 - 3.579: 98.5806% ( 3) 00:13:55.651 3.579 - 3.603: 98.6108% ( 4) 00:13:55.651 3.603 - 3.627: 98.6183% ( 1) 00:13:55.651 3.627 - 3.650: 98.6259% ( 1) 00:13:55.651 3.650 - 3.674: 98.6410% ( 2) 00:13:55.651 3.698 - 3.721: 98.6485% ( 1) 00:13:55.651 3.745 - 3.769: 98.6787% ( 4) 00:13:55.651 3.769 - 3.793: 98.6938% ( 2) 00:13:55.651 3.840 - 3.864: 98.7089% ( 2) 00:13:55.651 3.887 - 3.911: 98.7316% ( 3) 00:13:55.651 3.911 - 3.935: 98.7391% ( 1) 00:13:55.651 3.959 - 3.982: 98.7542% ( 2) 00:13:55.651 4.053 - 4.077: 98.7693% ( 2) 00:13:55.651 4.101 - 4.124: 98.7769% ( 1) [2024-10-30 12:25:28.319051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.909 4.148 - 4.172: 98.7844% ( 1) 00:13:55.909 4.172 - 4.196: 98.7995% ( 2) 00:13:55.909 4.219 - 4.243: 98.8071% ( 1) 00:13:55.909 4.267 - 4.290: 98.8146% ( 1) 00:13:55.909 4.290 - 4.314: 98.8222% ( 1) 00:13:55.909 4.504 - 4.527: 98.8297% ( 1) 00:13:55.909 5.523 - 5.547: 98.8373% ( 1) 00:13:55.909 5.736 - 5.760: 98.8448% ( 1) 00:13:55.909 5.879 - 5.902: 98.8524% ( 1) 00:13:55.909 5.973 - 5.997: 98.8599% ( 1) 00:13:55.909 6.495 - 6.542: 98.8675% ( 1) 00:13:55.909 6.684 - 6.732: 98.8750% ( 1) 00:13:55.909 6.779 - 6.827: 98.8826% ( 1) 00:13:55.909 6.827 - 6.874: 98.8977% ( 2) 00:13:55.909 6.921 - 6.969: 98.9052% ( 1) 00:13:55.909 7.016 - 7.064: 98.9128% ( 1) 00:13:55.909 7.064 - 7.111: 98.9279% ( 2) 00:13:55.909 7.727 - 7.775: 98.9354% ( 1) 00:13:55.909 7.775 - 7.822: 98.9430% ( 1) 00:13:55.909 7.964 - 8.012: 98.9505% ( 1) 00:13:55.909 8.154 - 8.201: 98.9656% ( 2) 00:13:55.909 8.391 - 8.439: 98.9732% ( 1) 00:13:55.909 9.055 - 9.102: 98.9807% ( 1) 00:13:55.909 9.102 - 9.150: 98.9883% ( 1) 00:13:55.909 10.003 - 10.050: 98.9958% ( 1) 00:13:55.909 10.145 - 10.193: 99.0034% ( 1) 00:13:55.909 15.265 - 15.360: 99.0109% ( 1) 00:13:55.909 15.360 - 15.455: 99.0185% ( 1) 00:13:55.909 15.644 - 15.739: 99.0487% ( 4) 00:13:55.909 15.739 - 15.834: 99.0638% ( 2) 00:13:55.909 15.834 - 15.929: 99.0789% ( 2) 00:13:55.909 15.929 - 16.024: 99.0940% ( 2) 00:13:55.909 16.024 - 16.119: 99.1242% ( 4) 00:13:55.909 16.119 - 16.213: 99.1393% ( 2) 00:13:55.909 16.213 - 16.308: 99.1695% ( 4) 00:13:55.909 16.308 - 16.403: 99.1846% ( 2) 00:13:55.909 16.403 - 16.498: 99.1997% ( 2) 00:13:55.909 16.498 - 16.593: 99.2601% ( 8) 00:13:55.909 16.593 - 16.687: 99.2978% ( 5) 00:13:55.909 16.687 - 16.782: 99.3356% ( 5) 00:13:55.909 16.782 - 16.877: 99.3507% ( 2) 00:13:55.909 16.877 - 16.972: 99.3733% ( 3) 00:13:55.909 16.972 - 17.067: 99.3884% ( 2) 00:13:55.909 17.067 - 17.161: 99.3960% ( 1) 00:13:55.909 17.256 - 17.351: 99.4035% ( 1) 00:13:55.909 17.351 - 17.446: 99.4186% ( 2) 00:13:55.909 17.446 - 17.541: 99.4337% ( 2) 00:13:55.909 17.541 - 17.636: 99.4413% ( 
1) 00:13:55.909 18.204 - 18.299: 99.4488% ( 1) 00:13:55.909 18.299 - 18.394: 99.4564% ( 1) 00:13:55.909 18.394 - 18.489: 99.4639% ( 1) 00:13:55.909 18.489 - 18.584: 99.4715% ( 1) 00:13:55.909 18.773 - 18.868: 99.4790% ( 1) 00:13:55.909 597.713 - 600.747: 99.4866% ( 1) 00:13:55.909 3980.705 - 4004.978: 99.9471% ( 61) 00:13:55.909 4004.978 - 4029.250: 100.0000% ( 7) 00:13:55.909 00:13:55.910 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:55.910 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:55.910 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:55.910 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:55.910 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.168 [ 00:13:56.168 { 00:13:56.168 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.168 "subtype": "Discovery", 00:13:56.168 "listen_addresses": [], 00:13:56.168 "allow_any_host": true, 00:13:56.168 "hosts": [] 00:13:56.168 }, 00:13:56.168 { 00:13:56.168 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.168 "subtype": "NVMe", 00:13:56.168 "listen_addresses": [ 00:13:56.168 { 00:13:56.168 "trtype": "VFIOUSER", 00:13:56.168 "adrfam": "IPv4", 00:13:56.168 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.168 "trsvcid": "0" 00:13:56.168 } 00:13:56.168 ], 00:13:56.168 "allow_any_host": true, 00:13:56.168 "hosts": [], 00:13:56.168 "serial_number": "SPDK1", 00:13:56.168 "model_number": "SPDK bdev Controller", 00:13:56.168 "max_namespaces": 32, 00:13:56.168 "min_cntlid": 1, 00:13:56.168 "max_cntlid": 65519, 00:13:56.168 "namespaces": [ 00:13:56.168 { 00:13:56.168 "nsid": 1, 00:13:56.168 "bdev_name": "Malloc1", 00:13:56.168 "name": "Malloc1", 00:13:56.168 "nguid": "094F67FCF495437EBDA735CC01B6FFA8", 00:13:56.168 "uuid": "094f67fc-f495-437e-bda7-35cc01b6ffa8" 00:13:56.168 }, 00:13:56.168 { 00:13:56.168 "nsid": 2, 00:13:56.168 "bdev_name": "Malloc3", 00:13:56.168 "name": "Malloc3", 00:13:56.168 "nguid": "AB79771920594C41A285B678BABD84B1", 00:13:56.168 "uuid": "ab797719-2059-4c41-a285-b678babd84b1" 00:13:56.168 } 00:13:56.168 ] 00:13:56.168 }, 00:13:56.168 { 00:13:56.168 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.168 "subtype": "NVMe", 00:13:56.168 "listen_addresses": [ 00:13:56.168 { 00:13:56.168 "trtype": "VFIOUSER", 00:13:56.168 "adrfam": "IPv4", 00:13:56.168 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.168 "trsvcid": "0" 00:13:56.168 } 00:13:56.168 ], 00:13:56.168 "allow_any_host": true, 00:13:56.168 "hosts": [], 00:13:56.168 "serial_number": "SPDK2", 00:13:56.168 "model_number": "SPDK bdev Controller", 00:13:56.168 "max_namespaces": 32, 00:13:56.168 "min_cntlid": 1, 00:13:56.168 "max_cntlid": 65519, 00:13:56.168 "namespaces": [ 00:13:56.168 { 00:13:56.168 "nsid": 1, 00:13:56.168 "bdev_name": "Malloc2", 00:13:56.168 "name": "Malloc2", 00:13:56.168 "nguid": "BB355CF3EF2C4C74B22AD2DAB585F90D", 00:13:56.168 "uuid": "bb355cf3-ef2c-4c74-b22a-d2dab585f90d" 00:13:56.168 } 00:13:56.168 ] 00:13:56.168 } 00:13:56.168 ] 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=591085 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:56.168 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:56.426 [2024-10-30 12:25:28.864165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.426 Malloc4 00:13:56.426 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:56.685 [2024-10-30 12:25:29.256235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.685 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.685 Asynchronous Event Request test 00:13:56.685 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.685 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.685 Registering asynchronous event callbacks... 00:13:56.685 Starting namespace attribute notice tests for all controllers... 00:13:56.685 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:56.685 aer_cb - Changed Namespace 00:13:56.685 Cleaning up... 
00:13:56.944 [ 00:13:56.944 { 00:13:56.944 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.944 "subtype": "Discovery", 00:13:56.944 "listen_addresses": [], 00:13:56.944 "allow_any_host": true, 00:13:56.944 "hosts": [] 00:13:56.944 }, 00:13:56.944 { 00:13:56.944 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.944 "subtype": "NVMe", 00:13:56.944 "listen_addresses": [ 00:13:56.944 { 00:13:56.944 "trtype": "VFIOUSER", 00:13:56.944 "adrfam": "IPv4", 00:13:56.944 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.944 "trsvcid": "0" 00:13:56.944 } 00:13:56.944 ], 00:13:56.944 "allow_any_host": true, 00:13:56.944 "hosts": [], 00:13:56.944 "serial_number": "SPDK1", 00:13:56.944 "model_number": "SPDK bdev Controller", 00:13:56.944 "max_namespaces": 32, 00:13:56.944 "min_cntlid": 1, 00:13:56.944 "max_cntlid": 65519, 00:13:56.944 "namespaces": [ 00:13:56.944 { 00:13:56.944 "nsid": 1, 00:13:56.944 "bdev_name": "Malloc1", 00:13:56.944 "name": "Malloc1", 00:13:56.944 "nguid": "094F67FCF495437EBDA735CC01B6FFA8", 00:13:56.944 "uuid": "094f67fc-f495-437e-bda7-35cc01b6ffa8" 00:13:56.944 }, 00:13:56.944 { 00:13:56.944 "nsid": 2, 00:13:56.944 "bdev_name": "Malloc3", 00:13:56.944 "name": "Malloc3", 00:13:56.944 "nguid": "AB79771920594C41A285B678BABD84B1", 00:13:56.944 "uuid": "ab797719-2059-4c41-a285-b678babd84b1" 00:13:56.944 } 00:13:56.944 ] 00:13:56.944 }, 00:13:56.944 { 00:13:56.944 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.944 "subtype": "NVMe", 00:13:56.944 "listen_addresses": [ 00:13:56.944 { 00:13:56.944 "trtype": "VFIOUSER", 00:13:56.944 "adrfam": "IPv4", 00:13:56.944 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.944 "trsvcid": "0" 00:13:56.944 } 00:13:56.944 ], 00:13:56.944 "allow_any_host": true, 00:13:56.944 "hosts": [], 00:13:56.944 "serial_number": "SPDK2", 00:13:56.944 "model_number": "SPDK bdev Controller", 00:13:56.944 "max_namespaces": 32, 00:13:56.944 "min_cntlid": 1, 00:13:56.944 "max_cntlid": 65519, 00:13:56.944 "namespaces": [ 00:13:56.944 { 00:13:56.944 "nsid": 1, 00:13:56.944 "bdev_name": "Malloc2", 00:13:56.944 "name": "Malloc2", 00:13:56.944 "nguid": "BB355CF3EF2C4C74B22AD2DAB585F90D", 00:13:56.944 "uuid": "bb355cf3-ef2c-4c74-b22a-d2dab585f90d" 00:13:56.944 }, 00:13:56.944 { 00:13:56.944 "nsid": 2, 00:13:56.944 "bdev_name": "Malloc4", 00:13:56.944 "name": "Malloc4", 00:13:56.944 "nguid": "E5921FFDAA1D415DA61CBC65BFED0D4D", 00:13:56.944 "uuid": "e5921ffd-aa1d-415d-a61c-bc65bfed0d4d" 00:13:56.944 } 00:13:56.944 ] 00:13:56.944 } 00:13:56.944 ] 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 591085 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 585453 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 585453 ']' 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 585453 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 585453 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 585453' 00:13:56.944 killing process with pid 585453 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 585453 00:13:56.944 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 585453 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=591223 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 591223' 00:13:57.509 Process pid: 591223 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 591223 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 591223 ']' 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.509 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:57.509 [2024-10-30 12:25:29.968045] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:57.509 [2024-10-30 12:25:29.969078] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:13:57.509 [2024-10-30 12:25:29.969144] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.509 [2024-10-30 12:25:30.037826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.509 [2024-10-30 12:25:30.096843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.509 [2024-10-30 12:25:30.096889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.509 [2024-10-30 12:25:30.096917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.509 [2024-10-30 12:25:30.096929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.509 [2024-10-30 12:25:30.096939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.509 [2024-10-30 12:25:30.098507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.509 [2024-10-30 12:25:30.098536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.509 [2024-10-30 12:25:30.098600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.509 [2024-10-30 12:25:30.098604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.509 [2024-10-30 12:25:30.188367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:57.509 [2024-10-30 12:25:30.188587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:57.509 [2024-10-30 12:25:30.188890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:57.509 [2024-10-30 12:25:30.189559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:57.509 [2024-10-30 12:25:30.189842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
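The trace that follows rebuilds the same two vfio-user subsystems, this time against an interrupt-mode target. Condensed into a hedged shell sketch (every command, flag, and path below appears verbatim in the surrounding trace; run from the SPDK repo root with the RPC socket at its default location):
# Interrupt-mode target plus the VFIOUSER transport created with -M -I,
# as setup_nvmf_vfio_user does in nvmf_vfio_user.sh.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
sleep 1
./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
# Per-device bring-up, shown for device 1; device 2 repeats the same
# steps with Malloc2, SPDK2, cnode2, and .../vfio-user2/2.
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0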
00:13:57.767 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.767 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:57.767 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:58.747 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:59.032 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:59.032 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:59.032 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.032 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:59.032 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:59.291 Malloc1 00:13:59.291 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:59.549 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:59.807 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:00.065 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.065 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:00.065 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:00.323 Malloc2 00:14:00.323 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:00.579 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:00.834 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 591223 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 591223 ']' 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 591223 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 591223 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 591223' 00:14:01.092 killing process with pid 591223 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 591223 00:14:01.092 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 591223 00:14:01.659 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:01.659 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.659 00:14:01.659 real 0m53.428s 00:14:01.659 user 3m26.734s 00:14:01.659 sys 0m3.810s 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:01.660 ************************************ 00:14:01.660 END TEST nvmf_vfio_user 00:14:01.660 ************************************ 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.660 ************************************ 00:14:01.660 START TEST nvmf_vfio_user_nvme_compliance 00:14:01.660 ************************************ 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:01.660 * Looking for test storage... 
00:14:01.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.660 --rc genhtml_branch_coverage=1 00:14:01.660 --rc genhtml_function_coverage=1 00:14:01.660 --rc genhtml_legend=1 00:14:01.660 --rc geninfo_all_blocks=1 00:14:01.660 --rc geninfo_unexecuted_blocks=1 00:14:01.660 00:14:01.660 ' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.660 --rc genhtml_branch_coverage=1 00:14:01.660 --rc genhtml_function_coverage=1 00:14:01.660 --rc genhtml_legend=1 00:14:01.660 --rc geninfo_all_blocks=1 00:14:01.660 --rc geninfo_unexecuted_blocks=1 00:14:01.660 00:14:01.660 ' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.660 --rc genhtml_branch_coverage=1 00:14:01.660 --rc genhtml_function_coverage=1 00:14:01.660 --rc genhtml_legend=1 00:14:01.660 --rc geninfo_all_blocks=1 00:14:01.660 --rc geninfo_unexecuted_blocks=1 00:14:01.660 00:14:01.660 ' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:01.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.660 --rc genhtml_branch_coverage=1 00:14:01.660 --rc genhtml_function_coverage=1 00:14:01.660 --rc genhtml_legend=1 00:14:01.660 --rc geninfo_all_blocks=1 00:14:01.660 --rc 
geninfo_unexecuted_blocks=1 00:14:01.660 00:14:01.660 ' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.660 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=[the go 1.21.1, protoc 21.7 and golangci 1.54.2 toolchain directories prepended ahead of the system PATH; the identical multi-kilobyte value is reassigned by paths/export.sh@3-4, exported by @5 and echoed by @6, elided here] 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=591832 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 591832' 00:14:01.661 Process pid: 591832 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 591832 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 591832 ']' 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.661 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:01.661 [2024-10-30 12:25:34.300462] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:14:01.661 [2024-10-30 12:25:34.300544] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.921 [2024-10-30 12:25:34.369370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.921 [2024-10-30 12:25:34.426740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.921 [2024-10-30 12:25:34.426807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.921 [2024-10-30 12:25:34.426820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.921 [2024-10-30 12:25:34.426831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.921 [2024-10-30 12:25:34.426840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.921 [2024-10-30 12:25:34.428198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.921 [2024-10-30 12:25:34.428277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.921 [2024-10-30 12:25:34.428296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.921 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.921 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:01.921 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.295 malloc0 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:03.295 12:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.295 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:03.295 00:14:03.295 00:14:03.295 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.295 http://cunit.sourceforge.net/ 00:14:03.295 00:14:03.295 00:14:03.295 Suite: nvme_compliance 00:14:03.295 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-30 12:25:35.802820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.295 [2024-10-30 12:25:35.804233] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:03.295 [2024-10-30 12:25:35.804279] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:03.295 [2024-10-30 12:25:35.804293] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:03.295 [2024-10-30 12:25:35.805841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.295 passed 00:14:03.295 Test: admin_identify_ctrlr_verify_fused ...[2024-10-30 12:25:35.897435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.295 [2024-10-30 12:25:35.900454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.295 passed 00:14:03.553 Test: admin_identify_ns ...[2024-10-30 12:25:35.985790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.553 [2024-10-30 12:25:36.046279] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:03.553 [2024-10-30 12:25:36.054278] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:03.553 [2024-10-30 12:25:36.075389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:03.553 passed 00:14:03.553 Test: admin_get_features_mandatory_features ...[2024-10-30 12:25:36.159253] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.553 [2024-10-30 12:25:36.162277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.553 passed 00:14:03.811 Test: admin_get_features_optional_features ...[2024-10-30 12:25:36.253895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.811 [2024-10-30 12:25:36.256912] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.811 passed 00:14:03.811 Test: admin_set_features_number_of_queues ...[2024-10-30 12:25:36.341244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.811 [2024-10-30 12:25:36.448375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.811 passed 00:14:04.069 Test: admin_get_log_page_mandatory_logs ...[2024-10-30 12:25:36.533390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.069 [2024-10-30 12:25:36.536416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.069 passed 00:14:04.069 Test: admin_get_log_page_with_lpo ...[2024-10-30 12:25:36.625632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.069 [2024-10-30 12:25:36.693274] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:04.069 [2024-10-30 12:25:36.706345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.069 passed 00:14:04.328 Test: fabric_property_get ...[2024-10-30 12:25:36.791846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.328 [2024-10-30 12:25:36.793115] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:04.328 [2024-10-30 12:25:36.794872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.328 passed 00:14:04.328 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-30 12:25:36.880437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.328 [2024-10-30 12:25:36.881762] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:04.328 [2024-10-30 12:25:36.883464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.328 passed 00:14:04.328 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-30 12:25:36.971765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.586 [2024-10-30 12:25:37.054268] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.586 [2024-10-30 12:25:37.070278] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.586 [2024-10-30 12:25:37.075389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.586 passed 00:14:04.586 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-30 12:25:37.160878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.586 [2024-10-30 12:25:37.162181] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:04.586 [2024-10-30 12:25:37.163900] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.586 passed 00:14:04.586 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-30 12:25:37.251878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.844 [2024-10-30 12:25:37.328270] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:04.844 [2024-10-30 12:25:37.352281] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.844 [2024-10-30 12:25:37.357365] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.844 passed 00:14:04.844 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-30 12:25:37.442938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.844 [2024-10-30 12:25:37.444316] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:04.844 [2024-10-30 12:25:37.444355] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:04.844 [2024-10-30 12:25:37.445972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.844 passed 00:14:05.102 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-30 12:25:37.532282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.102 [2024-10-30 12:25:37.626269] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:05.102 [2024-10-30 12:25:37.634279] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:05.102 [2024-10-30 12:25:37.642282] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:05.102 [2024-10-30 12:25:37.650264] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:05.102 [2024-10-30 12:25:37.679389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.102 passed 00:14:05.102 Test: admin_create_io_sq_verify_pc ...[2024-10-30 12:25:37.759995] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.102 [2024-10-30 12:25:37.779279] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:05.360 [2024-10-30 12:25:37.796377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.360 passed 00:14:05.360 Test: admin_create_io_qp_max_qps ...[2024-10-30 12:25:37.882945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.320 [2024-10-30 12:25:38.982273] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:06.886 [2024-10-30 12:25:39.370328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.886 passed 00:14:06.886 Test: admin_create_io_sq_shared_cq ...[2024-10-30 12:25:39.456670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.152 [2024-10-30 12:25:39.588292] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:07.152 [2024-10-30 12:25:39.625359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.152 passed 00:14:07.152 00:14:07.152 Run Summary: Type Total Ran Passed Failed Inactive 00:14:07.152 suites 1 1 n/a 0 0 00:14:07.152 tests 18 18 18 0 0 00:14:07.152 asserts 
360 360 360 0 n/a 00:14:07.152 00:14:07.153 Elapsed time = 1.589 seconds 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 591832 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 591832 ']' 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 591832 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 591832 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 591832' 00:14:07.153 killing process with pid 591832 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 591832 00:14:07.153 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 591832 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:07.413 00:14:07.413 real 0m5.883s 00:14:07.413 user 0m16.484s 00:14:07.413 sys 0m0.572s 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 ************************************ 00:14:07.413 END TEST nvmf_vfio_user_nvme_compliance 00:14:07.413 ************************************ 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.413 12:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 ************************************ 00:14:07.413 START TEST nvmf_vfio_user_fuzz 00:14:07.413 ************************************ 00:14:07.413 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:07.413 * Looking for test storage... 
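[Note] Every suite in this log runs under the run_test wrapper: print the START banner, execute the suite script under time (which produces the real/user/sys block above), then print the END banner. The banners and timing are visible in the log; the helper body below is an assumed reconstruction, not the autotest_common.sh source:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"      # e.g. .../test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }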
00:14:07.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.413 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.413 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.413 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.671 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.672 --rc genhtml_branch_coverage=1 00:14:07.672 --rc genhtml_function_coverage=1 00:14:07.672 --rc genhtml_legend=1 00:14:07.672 --rc geninfo_all_blocks=1 00:14:07.672 --rc geninfo_unexecuted_blocks=1 00:14:07.672 00:14:07.672 ' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.672 --rc genhtml_branch_coverage=1 00:14:07.672 --rc genhtml_function_coverage=1 00:14:07.672 --rc genhtml_legend=1 00:14:07.672 --rc geninfo_all_blocks=1 00:14:07.672 --rc geninfo_unexecuted_blocks=1 00:14:07.672 00:14:07.672 ' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.672 --rc genhtml_branch_coverage=1 00:14:07.672 --rc genhtml_function_coverage=1 00:14:07.672 --rc genhtml_legend=1 00:14:07.672 --rc geninfo_all_blocks=1 00:14:07.672 --rc geninfo_unexecuted_blocks=1 00:14:07.672 00:14:07.672 ' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.672 --rc genhtml_branch_coverage=1 00:14:07.672 --rc genhtml_function_coverage=1 00:14:07.672 --rc genhtml_legend=1 00:14:07.672 --rc geninfo_all_blocks=1 00:14:07.672 --rc geninfo_unexecuted_blocks=1 00:14:07.672 00:14:07.672 ' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=[same toolchain PATH block as in the compliance run above, assigned by paths/export.sh@2-4, exported by @5 and echoed by @6; elided] 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:14:07.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:07.672 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=592568 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 592568' 00:14:07.673 Process pid: 592568 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 592568 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 592568 ']' 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
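[Note] waitforlisten does not sleep a fixed interval; per the trace it takes the pid, defaults rpc_addr to /var/tmp/spdk.sock, and polls up to max_retries=100 times until the RPC socket is ready, failing fast if the target dies during startup. A minimal sketch of that loop (the poll delay and the socket test are assumptions):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do              # max_retries=100 per the trace
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$rpc_addr" ] && return 0           # socket exists and is ready
            sleep 0.5                                # assumed retry delay
        done
        return 1
    }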
00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.673 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.930 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.930 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:14:07.930 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.864 malloc0 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
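[Note] Before the fuzzer starts, the trace above has already assembled the vfio-user target with five RPCs: create the VFIOUSER transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2021-09.io.spdk:cnode0, attach the bdev as a namespace, and add a listener at /var/run/vfio-user. rpc_cmd drives the target's JSON-RPC interface, so the standalone equivalent (via scripts/rpc.py against the default /var/tmp/spdk.sock) would be roughly:

    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The 30-second nvme_fuzz run that follows (-t 30) pins itself to core 1 (-m 0x2), seeds its generator with -S 123456 for reproducibility, and targets the transport ID just stored in $trid.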
00:14:08.864 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:40.929 Fuzzing completed. Shutting down the fuzz application 00:14:40.929 00:14:40.929 Dumping successful admin opcodes: 00:14:40.929 8, 9, 10, 24, 00:14:40.929 Dumping successful io opcodes: 00:14:40.929 0, 00:14:40.929 NS: 0x20000081ef00 I/O qp, Total commands completed: 659366, total successful commands: 2567, random_seed: 3518427072 00:14:40.929 NS: 0x20000081ef00 admin qp, Total commands completed: 84122, total successful commands: 668, random_seed: 684610560 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 592568 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 592568 ']' 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 592568 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 592568 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 592568' 00:14:40.929 killing process with pid 592568 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 592568 00:14:40.929 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 592568 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:40.929 00:14:40.929 real 0m32.222s 00:14:40.929 user 0m30.483s 00:14:40.929 sys 0m29.407s 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.929 ************************************ 
00:14:40.929 END TEST nvmf_vfio_user_fuzz 00:14:40.929 ************************************ 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.929 ************************************ 00:14:40.929 START TEST nvmf_auth_target 00:14:40.929 ************************************ 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:40.929 * Looking for test storage... 00:14:40.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.929 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.930 --rc genhtml_branch_coverage=1 00:14:40.930 --rc genhtml_function_coverage=1 00:14:40.930 --rc genhtml_legend=1 00:14:40.930 --rc geninfo_all_blocks=1 00:14:40.930 --rc geninfo_unexecuted_blocks=1 00:14:40.930 00:14:40.930 ' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.930 --rc genhtml_branch_coverage=1 00:14:40.930 --rc genhtml_function_coverage=1 00:14:40.930 --rc genhtml_legend=1 00:14:40.930 --rc geninfo_all_blocks=1 00:14:40.930 --rc geninfo_unexecuted_blocks=1 00:14:40.930 00:14:40.930 ' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.930 --rc genhtml_branch_coverage=1 00:14:40.930 --rc genhtml_function_coverage=1 00:14:40.930 --rc genhtml_legend=1 00:14:40.930 --rc geninfo_all_blocks=1 00:14:40.930 --rc geninfo_unexecuted_blocks=1 00:14:40.930 00:14:40.930 ' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:40.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.930 --rc genhtml_branch_coverage=1 00:14:40.930 --rc genhtml_function_coverage=1 00:14:40.930 --rc genhtml_legend=1 00:14:40.930 --rc geninfo_all_blocks=1 00:14:40.930 --rc geninfo_unexecuted_blocks=1 00:14:40.930 00:14:40.930 ' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.930 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:40.930 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.931 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:42.311 
12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:42.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.311 12:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:42.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.311 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:42.312 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:42.312 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:42.312 12:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:42.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:14:42.312 00:14:42.312 --- 10.0.0.2 ping statistics --- 00:14:42.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.312 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:14:42.312 00:14:42.312 --- 10.0.0.1 ping statistics --- 00:14:42.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.312 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=598025 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 598025 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 598025 ']' 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:42.312 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.571 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=598050 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=de543abfff9c68656a345f449c0ba1dfd4f8ec89cafd0fdb 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mcl 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key de543abfff9c68656a345f449c0ba1dfd4f8ec89cafd0fdb 0 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 de543abfff9c68656a345f449c0ba1dfd4f8ec89cafd0fdb 0 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=de543abfff9c68656a345f449c0ba1dfd4f8ec89cafd0fdb 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:42.572 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mcl 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mcl 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.mcl 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4fb6287cf9af3def2a0707367366e7e7b4c0399035c54081cc6f3f07a5d58986 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZbK 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4fb6287cf9af3def2a0707367366e7e7b4c0399035c54081cc6f3f07a5d58986 3 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4fb6287cf9af3def2a0707367366e7e7b4c0399035c54081cc6f3f07a5d58986 3 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4fb6287cf9af3def2a0707367366e7e7b4c0399035c54081cc6f3f07a5d58986 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZbK 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZbK 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ZbK 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b53f64433e5b32c252c63cbed51fa784 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xfO 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b53f64433e5b32c252c63cbed51fa784 1 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b53f64433e5b32c252c63cbed51fa784 1 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b53f64433e5b32c252c63cbed51fa784 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xfO 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xfO 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.xfO 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:42.831 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c9b0f53c2ba347058fb7a84570b581ea64aa97a8865d9a66 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bwb 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c9b0f53c2ba347058fb7a84570b581ea64aa97a8865d9a66 2 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c9b0f53c2ba347058fb7a84570b581ea64aa97a8865d9a66 2 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:42.832 12:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c9b0f53c2ba347058fb7a84570b581ea64aa97a8865d9a66 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bwb 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bwb 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Bwb 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9e98e4a88779bc599725e7a6ab59e14caff89c531f73ca74 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5VG 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9e98e4a88779bc599725e7a6ab59e14caff89c531f73ca74 2 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9e98e4a88779bc599725e7a6ab59e14caff89c531f73ca74 2 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9e98e4a88779bc599725e7a6ab59e14caff89c531f73ca74 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5VG 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5VG 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5VG 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=292a691b5ebe454b74de31cb02ddb3ae 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dRp 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 292a691b5ebe454b74de31cb02ddb3ae 1 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 292a691b5ebe454b74de31cb02ddb3ae 1 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=292a691b5ebe454b74de31cb02ddb3ae 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:42.832 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dRp 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dRp 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.dRp 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ac37b693527d23e3667d70d894f697dddeba2db73a158d4b4971e34013bf6f30 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Y5L 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key ac37b693527d23e3667d70d894f697dddeba2db73a158d4b4971e34013bf6f30 3 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ac37b693527d23e3667d70d894f697dddeba2db73a158d4b4971e34013bf6f30 3 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ac37b693527d23e3667d70d894f697dddeba2db73a158d4b4971e34013bf6f30 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Y5L 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Y5L 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Y5L 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 598025 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 598025 ']' 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.090 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 598050 /var/tmp/host.sock 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 598050 ']' 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:43.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
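With both daemons coming up (nvmf_tgt inside the target netns on /var/tmp/spdk.sock, the host-side spdk_tgt on /var/tmp/host.sock) and all four key/ckey pairs generated, the trace that follows registers each key file as a keyring entry on both RPC sockets and wires DH-HMAC-CHAP into the subsystem. A condensed sketch of that flow for key0/ckey0, using the commands that appear verbatim below; the explicit -s sockets are spelled out here for clarity (rpc_cmd in the trace talks to the target's default socket, hostrpc to /var/tmp/host.sock), and keys[i] is the host secret while ckeys[i] is the controller secret that makes authentication bidirectional:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the secrets with both the target and the host application.
$rpc -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.mcl
$rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.mcl
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK

# Host side: pin the initiator to one digest/dhgroup combination per iteration.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: require DH-HMAC-CHAP for this host NQN (ckey0 => bidirectional).
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach; success shows up in nvmf_subsystem_get_qpairs output below
# as "auth": { "state": "completed", ... }.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

The test then repeats this cycle for every digest/dhgroup/key combination, verifying the qpair's auth digest, dhgroup, and state with jq before detaching.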
00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.349 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mcl 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.mcl 00:14:43.607 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.mcl 00:14:43.864 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ZbK ]] 00:14:43.864 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK 00:14:43.864 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.865 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.865 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.865 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK 00:14:43.865 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK 00:14:44.122 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:44.122 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xfO 00:14:44.122 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.122 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.122 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.122 12:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xfO 00:14:44.122 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xfO 00:14:44.380 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Bwb ]] 00:14:44.380 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwb 00:14:44.380 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.380 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.380 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.380 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwb 00:14:44.380 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwb 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5VG 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5VG 00:14:44.639 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5VG 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.dRp ]] 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dRp 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dRp 00:14:44.898 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dRp 00:14:45.156 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:45.156 12:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Y5L 00:14:45.156 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.156 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.156 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.156 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Y5L 00:14:45.156 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Y5L 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.722 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.722 
12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.296 00:14:46.296 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.296 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.296 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.554 { 00:14:46.554 "cntlid": 1, 00:14:46.554 "qid": 0, 00:14:46.554 "state": "enabled", 00:14:46.554 "thread": "nvmf_tgt_poll_group_000", 00:14:46.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:46.554 "listen_address": { 00:14:46.554 "trtype": "TCP", 00:14:46.554 "adrfam": "IPv4", 00:14:46.554 "traddr": "10.0.0.2", 00:14:46.554 "trsvcid": "4420" 00:14:46.554 }, 00:14:46.554 "peer_address": { 00:14:46.554 "trtype": "TCP", 00:14:46.554 "adrfam": "IPv4", 00:14:46.554 "traddr": "10.0.0.1", 00:14:46.554 "trsvcid": "40804" 00:14:46.554 }, 00:14:46.554 "auth": { 00:14:46.554 "state": "completed", 00:14:46.554 "digest": "sha256", 00:14:46.554 "dhgroup": "null" 00:14:46.554 } 00:14:46.554 } 00:14:46.554 ]' 00:14:46.554 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.554 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.811 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:14:46.812 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:47.743 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.000 12:26:20 
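
The second leg of each iteration (auth.sh@80-82) drives the same handshake through the kernel initiator via nvme-cli. Trimmed to its essentials, with the secrets elided here (the full DHHC-1 blobs appear verbatim in the trace):

# Kernel NVMe/TCP connect with bidirectional DH-HMAC-CHAP.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' \
    --dhchap-ctrl-secret 'DHHC-1:03:...'

# Tear the kernel controller down again before the next combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
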
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.000 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.258 00:14:48.258 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.258 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.258 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.515 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.515 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.516 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.516 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.773 { 00:14:48.773 "cntlid": 3, 00:14:48.773 "qid": 0, 00:14:48.773 "state": "enabled", 00:14:48.773 "thread": "nvmf_tgt_poll_group_000", 00:14:48.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:48.773 "listen_address": { 00:14:48.773 "trtype": "TCP", 00:14:48.773 "adrfam": "IPv4", 00:14:48.773 "traddr": "10.0.0.2", 00:14:48.773 "trsvcid": "4420" 00:14:48.773 }, 00:14:48.773 "peer_address": { 00:14:48.773 "trtype": "TCP", 00:14:48.773 "adrfam": "IPv4", 00:14:48.773 "traddr": "10.0.0.1", 00:14:48.773 "trsvcid": "40832" 00:14:48.773 }, 00:14:48.773 "auth": { 00:14:48.773 "state": "completed", 00:14:48.773 "digest": "sha256", 00:14:48.773 "dhgroup": "null" 00:14:48.773 } 00:14:48.773 } 00:14:48.773 ]' 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.773 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.031 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:14:49.031 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:49.964 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.223 12:26:22 
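
A note on the secret strings: the DHHC-1 prefix encodes how the pre-shared key was transformed. Per the NVMe DH-HMAC-CHAP secret representation (background knowledge, not something this log states), the second field is a hash identifier: 00 = cleartext key, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, and the base64 payload carries the key plus a CRC-32. So the pair above is a SHA-256 host secret with a SHA-384 controller secret. nvme-cli can mint such secrets, e.g. (illustrative invocation, not taken from this run):

# Generate a SHA-256-transformed DH-HMAC-CHAP secret bound to this host NQN.
nvme gen-dhchap-key --hmac=1 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
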
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.223 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.481 00:14:50.481 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.481 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.482 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.739 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.739 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.739 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.739 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.739 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.739 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.739 { 00:14:50.739 "cntlid": 5, 00:14:50.739 "qid": 0, 00:14:50.739 "state": "enabled", 00:14:50.739 "thread": "nvmf_tgt_poll_group_000", 00:14:50.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:50.739 "listen_address": { 00:14:50.739 "trtype": "TCP", 00:14:50.739 "adrfam": "IPv4", 00:14:50.739 "traddr": "10.0.0.2", 00:14:50.739 "trsvcid": "4420" 00:14:50.739 }, 00:14:50.740 "peer_address": { 00:14:50.740 "trtype": "TCP", 00:14:50.740 "adrfam": "IPv4", 00:14:50.740 "traddr": "10.0.0.1", 00:14:50.740 "trsvcid": "40870" 00:14:50.740 }, 00:14:50.740 "auth": { 00:14:50.740 "state": "completed", 00:14:50.740 "digest": "sha256", 00:14:50.740 "dhgroup": "null" 00:14:50.740 } 00:14:50.740 } 00:14:50.740 ]' 00:14:50.740 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.740 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.740 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.998 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:50.998 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.998 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.998 12:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.998 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.256 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:14:51.256 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:14:52.191 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.192 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.450 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.709 00:14:52.709 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.709 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.709 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.967 { 00:14:52.967 "cntlid": 7, 00:14:52.967 "qid": 0, 00:14:52.967 "state": "enabled", 00:14:52.967 "thread": "nvmf_tgt_poll_group_000", 00:14:52.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:52.967 "listen_address": { 00:14:52.967 "trtype": "TCP", 00:14:52.967 "adrfam": "IPv4", 00:14:52.967 "traddr": "10.0.0.2", 00:14:52.967 "trsvcid": "4420" 00:14:52.967 }, 00:14:52.967 "peer_address": { 00:14:52.967 "trtype": "TCP", 00:14:52.967 "adrfam": "IPv4", 00:14:52.967 "traddr": "10.0.0.1", 00:14:52.967 "trsvcid": "34246" 00:14:52.967 }, 00:14:52.967 "auth": { 00:14:52.967 "state": "completed", 00:14:52.967 "digest": "sha256", 00:14:52.967 "dhgroup": "null" 00:14:52.967 } 00:14:52.967 } 00:14:52.967 ]' 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.967 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.226 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
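
One detail worth flagging in this keyid=3 pass: the host is added with --dhchap-key key3 and no --dhchap-ctrlr-key, so it exercises unidirectional authentication (the host proves itself, the controller does not). That is by construction, not an oversight: auth.sh@68 builds the optional flag with a ${var:+...} expansion, which disappears entirely when no controller key exists for that slot. The mechanism in isolation (ckeys, subnqn, and hostnqn are the script's own variables):

# ${ckeys[3]:+...} expands to nothing when ckeys[3] is empty or unset,
# so the controller-key flag is silently dropped for key3.
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3 ${ckey[@]}
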
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.226 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.226 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.485 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:14:53.485 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.419 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
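
Here the middle loop advances from the null DH group to ffdhe2048 and the whole keyid sweep repeats. Reconstructed from the @118/@119/@120 loop heads, the sweep has this shape (only sha256 and the null/ffdhe2048/ffdhe3072 groups are visible in this excerpt; the full arrays may be longer):

# Structure of the matrix walk, per the logged loop heads.
for digest in "${digests[@]}"; do            # sha256 in this excerpt
    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            # auth.sh@121: narrow the host to this one combination,
            # then auth.sh@123 runs the full connect/verify/teardown cycle.
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
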
common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.677 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.935 00:14:54.935 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.935 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.935 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.193 { 00:14:55.193 "cntlid": 9, 00:14:55.193 "qid": 0, 00:14:55.193 "state": "enabled", 00:14:55.193 "thread": "nvmf_tgt_poll_group_000", 00:14:55.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:55.193 "listen_address": { 00:14:55.193 "trtype": "TCP", 00:14:55.193 "adrfam": "IPv4", 00:14:55.193 "traddr": "10.0.0.2", 00:14:55.193 "trsvcid": "4420" 00:14:55.193 }, 00:14:55.193 "peer_address": { 00:14:55.193 "trtype": "TCP", 00:14:55.193 "adrfam": "IPv4", 00:14:55.193 "traddr": "10.0.0.1", 00:14:55.193 "trsvcid": "34270" 00:14:55.193 }, 00:14:55.193 "auth": { 00:14:55.193 "state": "completed", 00:14:55.193 "digest": "sha256", 00:14:55.193 "dhgroup": "ffdhe2048" 00:14:55.193 } 00:14:55.193 } 00:14:55.193 ]' 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:55.193 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.452 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.452 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.452 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.710 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:14:55.710 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.643 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.900 12:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.900 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.157 00:14:57.157 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.157 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.157 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.414 { 00:14:57.414 "cntlid": 11, 00:14:57.414 "qid": 0, 00:14:57.414 "state": "enabled", 00:14:57.414 "thread": "nvmf_tgt_poll_group_000", 00:14:57.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:57.414 "listen_address": { 00:14:57.414 "trtype": "TCP", 00:14:57.414 "adrfam": "IPv4", 00:14:57.414 "traddr": "10.0.0.2", 00:14:57.414 "trsvcid": "4420" 00:14:57.414 }, 00:14:57.414 "peer_address": { 00:14:57.414 "trtype": "TCP", 00:14:57.414 "adrfam": "IPv4", 00:14:57.414 "traddr": "10.0.0.1", 00:14:57.414 "trsvcid": "34286" 00:14:57.414 }, 00:14:57.414 "auth": { 00:14:57.414 "state": "completed", 00:14:57.414 "digest": "sha256", 00:14:57.414 "dhgroup": "ffdhe2048" 00:14:57.414 } 00:14:57.414 } 00:14:57.414 ]' 00:14:57.414 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.414 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.414 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.414 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.414 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.671 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.671 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.671 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.928 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:14:57.928 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:58.861 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.120 12:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.120 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.379 00:14:59.379 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.379 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.379 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.637 { 00:14:59.637 "cntlid": 13, 00:14:59.637 "qid": 0, 00:14:59.637 "state": "enabled", 00:14:59.637 "thread": "nvmf_tgt_poll_group_000", 00:14:59.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:59.637 "listen_address": { 00:14:59.637 "trtype": "TCP", 00:14:59.637 "adrfam": "IPv4", 00:14:59.637 "traddr": "10.0.0.2", 00:14:59.637 "trsvcid": "4420" 00:14:59.637 }, 00:14:59.637 "peer_address": { 00:14:59.637 "trtype": "TCP", 00:14:59.637 "adrfam": "IPv4", 00:14:59.637 "traddr": "10.0.0.1", 00:14:59.637 "trsvcid": "34316" 00:14:59.637 }, 00:14:59.637 "auth": { 00:14:59.637 "state": "completed", 00:14:59.637 "digest": 
"sha256", 00:14:59.637 "dhgroup": "ffdhe2048" 00:14:59.637 } 00:14:59.637 } 00:14:59.637 ]' 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.637 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.203 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:00.203 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.139 12:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.139 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.706 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.706 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.965 { 00:15:01.965 "cntlid": 15, 00:15:01.965 "qid": 0, 00:15:01.965 "state": "enabled", 00:15:01.965 "thread": "nvmf_tgt_poll_group_000", 00:15:01.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:01.965 "listen_address": { 00:15:01.965 "trtype": "TCP", 00:15:01.965 "adrfam": "IPv4", 00:15:01.965 "traddr": "10.0.0.2", 00:15:01.965 "trsvcid": "4420" 00:15:01.965 }, 00:15:01.965 "peer_address": { 00:15:01.965 "trtype": "TCP", 00:15:01.965 "adrfam": "IPv4", 00:15:01.965 "traddr": "10.0.0.1", 00:15:01.965 
"trsvcid": "34348" 00:15:01.965 }, 00:15:01.965 "auth": { 00:15:01.965 "state": "completed", 00:15:01.965 "digest": "sha256", 00:15:01.965 "dhgroup": "ffdhe2048" 00:15:01.965 } 00:15:01.965 } 00:15:01.965 ]' 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.965 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.223 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:02.223 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:03.158 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:03.416 12:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.416 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.674 00:15:03.674 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.674 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.674 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.933 { 00:15:03.933 "cntlid": 17, 00:15:03.933 "qid": 0, 00:15:03.933 "state": "enabled", 00:15:03.933 "thread": "nvmf_tgt_poll_group_000", 00:15:03.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:03.933 "listen_address": { 00:15:03.933 "trtype": "TCP", 00:15:03.933 "adrfam": "IPv4", 
00:15:03.933 "traddr": "10.0.0.2", 00:15:03.933 "trsvcid": "4420" 00:15:03.933 }, 00:15:03.933 "peer_address": { 00:15:03.933 "trtype": "TCP", 00:15:03.933 "adrfam": "IPv4", 00:15:03.933 "traddr": "10.0.0.1", 00:15:03.933 "trsvcid": "51002" 00:15:03.933 }, 00:15:03.933 "auth": { 00:15:03.933 "state": "completed", 00:15:03.933 "digest": "sha256", 00:15:03.933 "dhgroup": "ffdhe3072" 00:15:03.933 } 00:15:03.933 } 00:15:03.933 ]' 00:15:03.933 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.192 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.450 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:04.450 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.383 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.646 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.903 00:15:05.903 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.903 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.903 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.161 { 
00:15:06.161 "cntlid": 19, 00:15:06.161 "qid": 0, 00:15:06.161 "state": "enabled", 00:15:06.161 "thread": "nvmf_tgt_poll_group_000", 00:15:06.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:06.161 "listen_address": { 00:15:06.161 "trtype": "TCP", 00:15:06.161 "adrfam": "IPv4", 00:15:06.161 "traddr": "10.0.0.2", 00:15:06.161 "trsvcid": "4420" 00:15:06.161 }, 00:15:06.161 "peer_address": { 00:15:06.161 "trtype": "TCP", 00:15:06.161 "adrfam": "IPv4", 00:15:06.161 "traddr": "10.0.0.1", 00:15:06.161 "trsvcid": "51030" 00:15:06.161 }, 00:15:06.161 "auth": { 00:15:06.161 "state": "completed", 00:15:06.161 "digest": "sha256", 00:15:06.161 "dhgroup": "ffdhe3072" 00:15:06.161 } 00:15:06.161 } 00:15:06.161 ]' 00:15:06.161 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.419 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.678 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:06.678 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.612 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.870 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.128 00:15:08.128 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.128 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.128 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.424 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.424 12:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.424 { 00:15:08.424 "cntlid": 21, 00:15:08.424 "qid": 0, 00:15:08.424 "state": "enabled", 00:15:08.424 "thread": "nvmf_tgt_poll_group_000", 00:15:08.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:08.424 "listen_address": { 00:15:08.424 "trtype": "TCP", 00:15:08.424 "adrfam": "IPv4", 00:15:08.424 "traddr": "10.0.0.2", 00:15:08.424 "trsvcid": "4420" 00:15:08.424 }, 00:15:08.424 "peer_address": { 00:15:08.424 "trtype": "TCP", 00:15:08.424 "adrfam": "IPv4", 00:15:08.424 "traddr": "10.0.0.1", 00:15:08.424 "trsvcid": "51072" 00:15:08.424 }, 00:15:08.424 "auth": { 00:15:08.424 "state": "completed", 00:15:08.424 "digest": "sha256", 00:15:08.424 "dhgroup": "ffdhe3072" 00:15:08.424 } 00:15:08.424 } 00:15:08.424 ]' 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.424 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.682 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.682 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.682 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.940 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:08.940 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:09.874 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.875 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.441 00:15:10.441 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.441 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.441 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.699 12:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.699 { 00:15:10.699 "cntlid": 23, 00:15:10.699 "qid": 0, 00:15:10.699 "state": "enabled", 00:15:10.699 "thread": "nvmf_tgt_poll_group_000", 00:15:10.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:10.699 "listen_address": { 00:15:10.699 "trtype": "TCP", 00:15:10.699 "adrfam": "IPv4", 00:15:10.699 "traddr": "10.0.0.2", 00:15:10.699 "trsvcid": "4420" 00:15:10.699 }, 00:15:10.699 "peer_address": { 00:15:10.699 "trtype": "TCP", 00:15:10.699 "adrfam": "IPv4", 00:15:10.699 "traddr": "10.0.0.1", 00:15:10.699 "trsvcid": "51104" 00:15:10.699 }, 00:15:10.699 "auth": { 00:15:10.699 "state": "completed", 00:15:10.699 "digest": "sha256", 00:15:10.699 "dhgroup": "ffdhe3072" 00:15:10.699 } 00:15:10.699 } 00:15:10.699 ]' 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.958 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:10.958 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.955 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.231 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.490 00:15:12.490 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.490 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.490 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.747 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.748 { 00:15:12.748 "cntlid": 25, 00:15:12.748 "qid": 0, 00:15:12.748 "state": "enabled", 00:15:12.748 "thread": "nvmf_tgt_poll_group_000", 00:15:12.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:12.748 "listen_address": { 00:15:12.748 "trtype": "TCP", 00:15:12.748 "adrfam": "IPv4", 00:15:12.748 "traddr": "10.0.0.2", 00:15:12.748 "trsvcid": "4420" 00:15:12.748 }, 00:15:12.748 "peer_address": { 00:15:12.748 "trtype": "TCP", 00:15:12.748 "adrfam": "IPv4", 00:15:12.748 "traddr": "10.0.0.1", 00:15:12.748 "trsvcid": "51132" 00:15:12.748 }, 00:15:12.748 "auth": { 00:15:12.748 "state": "completed", 00:15:12.748 "digest": "sha256", 00:15:12.748 "dhgroup": "ffdhe4096" 00:15:12.748 } 00:15:12.748 } 00:15:12.748 ]' 00:15:12.748 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.006 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.265 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:13.265 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.199 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.457 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.457 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.024 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.024 { 00:15:15.024 "cntlid": 27, 00:15:15.024 "qid": 0, 00:15:15.024 "state": "enabled", 00:15:15.024 "thread": "nvmf_tgt_poll_group_000", 00:15:15.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:15.024 "listen_address": { 00:15:15.024 "trtype": "TCP", 00:15:15.024 "adrfam": "IPv4", 00:15:15.024 "traddr": "10.0.0.2", 00:15:15.024 "trsvcid": "4420" 00:15:15.024 }, 00:15:15.024 "peer_address": { 00:15:15.024 "trtype": "TCP", 00:15:15.024 "adrfam": "IPv4", 00:15:15.024 "traddr": "10.0.0.1", 00:15:15.024 "trsvcid": "48124" 00:15:15.024 }, 00:15:15.024 "auth": { 00:15:15.024 "state": "completed", 00:15:15.024 "digest": "sha256", 00:15:15.024 "dhgroup": "ffdhe4096" 00:15:15.024 } 00:15:15.024 } 00:15:15.024 ]' 00:15:15.024 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.281 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.539 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:15.539 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:16.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.473 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.731 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.989 00:15:16.989 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
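
The trace above repeats one fixed pattern for every digest / DH-group / key-index combination. For reference, here is a condensed sketch of a single iteration, reconstructed from the commands visible in this trace: the shell variables and the two DHCHAP_* placeholders are mine (the real DHHC-1 secret strings are printed in the log itself), all flags and jq filters are taken verbatim from the logged commands, and the target-side rpc.py calls are shown without -s, i.e. against rpc.py's default application socket, which this harness may configure differently.

    # One connect_authenticate iteration, as exercised by target/auth.sh
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock          # host-side SPDK app ("hostrpc" in the trace)
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # 1. Restrict the host initiator to a single digest and DH group.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # 2. Allow the host NQN on the subsystem with a DH-CHAP key
    #    (and a controller key, for the key indexes that have one).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller from the host side; this runs the handshake.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 4. Verify the controller came up and the qpair reports the expected
    #    digest, DH group, and a completed authentication state.
    [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    [[ $("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state') == completed ]]

    # 5. Detach, redo the handshake with nvme-cli, then drop the host again.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n "$SUBNQN"
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
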
00:15:16.989 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.989 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.554 { 00:15:17.554 "cntlid": 29, 00:15:17.554 "qid": 0, 00:15:17.554 "state": "enabled", 00:15:17.554 "thread": "nvmf_tgt_poll_group_000", 00:15:17.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:17.554 "listen_address": { 00:15:17.554 "trtype": "TCP", 00:15:17.554 "adrfam": "IPv4", 00:15:17.554 "traddr": "10.0.0.2", 00:15:17.554 "trsvcid": "4420" 00:15:17.554 }, 00:15:17.554 "peer_address": { 00:15:17.554 "trtype": "TCP", 00:15:17.554 "adrfam": "IPv4", 00:15:17.554 "traddr": "10.0.0.1", 00:15:17.554 "trsvcid": "48152" 00:15:17.554 }, 00:15:17.554 "auth": { 00:15:17.554 "state": "completed", 00:15:17.554 "digest": "sha256", 00:15:17.554 "dhgroup": "ffdhe4096" 00:15:17.554 } 00:15:17.554 } 00:15:17.554 ]' 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.554 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.554 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.554 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.554 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.554 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.554 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.812 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:17.812 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: 
--dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.747 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.005 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.263 00:15:19.263 12:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.263 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.263 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.829 { 00:15:19.829 "cntlid": 31, 00:15:19.829 "qid": 0, 00:15:19.829 "state": "enabled", 00:15:19.829 "thread": "nvmf_tgt_poll_group_000", 00:15:19.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:19.829 "listen_address": { 00:15:19.829 "trtype": "TCP", 00:15:19.829 "adrfam": "IPv4", 00:15:19.829 "traddr": "10.0.0.2", 00:15:19.829 "trsvcid": "4420" 00:15:19.829 }, 00:15:19.829 "peer_address": { 00:15:19.829 "trtype": "TCP", 00:15:19.829 "adrfam": "IPv4", 00:15:19.829 "traddr": "10.0.0.1", 00:15:19.829 "trsvcid": "48172" 00:15:19.829 }, 00:15:19.829 "auth": { 00:15:19.829 "state": "completed", 00:15:19.829 "digest": "sha256", 00:15:19.829 "dhgroup": "ffdhe4096" 00:15:19.829 } 00:15:19.829 } 00:15:19.829 ]' 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.829 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.087 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:20.087 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.020 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.277 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.278 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.843 00:15:21.844 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.844 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.844 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.101 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.102 { 00:15:22.102 "cntlid": 33, 00:15:22.102 "qid": 0, 00:15:22.102 "state": "enabled", 00:15:22.102 "thread": "nvmf_tgt_poll_group_000", 00:15:22.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:22.102 "listen_address": { 00:15:22.102 "trtype": "TCP", 00:15:22.102 "adrfam": "IPv4", 00:15:22.102 "traddr": "10.0.0.2", 00:15:22.102 "trsvcid": "4420" 00:15:22.102 }, 00:15:22.102 "peer_address": { 00:15:22.102 "trtype": "TCP", 00:15:22.102 "adrfam": "IPv4", 00:15:22.102 "traddr": "10.0.0.1", 00:15:22.102 "trsvcid": "48210" 00:15:22.102 }, 00:15:22.102 "auth": { 00:15:22.102 "state": "completed", 00:15:22.102 "digest": "sha256", 00:15:22.102 "dhgroup": "ffdhe6144" 00:15:22.102 } 00:15:22.102 } 00:15:22.102 ]' 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.102 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.669 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret 
DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:22.669 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:23.235 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.493 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.751 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.317 00:15:24.317 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.317 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.317 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.575 { 00:15:24.575 "cntlid": 35, 00:15:24.575 "qid": 0, 00:15:24.575 "state": "enabled", 00:15:24.575 "thread": "nvmf_tgt_poll_group_000", 00:15:24.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:24.575 "listen_address": { 00:15:24.575 "trtype": "TCP", 00:15:24.575 "adrfam": "IPv4", 00:15:24.575 "traddr": "10.0.0.2", 00:15:24.575 "trsvcid": "4420" 00:15:24.575 }, 00:15:24.575 "peer_address": { 00:15:24.575 "trtype": "TCP", 00:15:24.575 "adrfam": "IPv4", 00:15:24.575 "traddr": "10.0.0.1", 00:15:24.575 "trsvcid": "47902" 00:15:24.575 }, 00:15:24.575 "auth": { 00:15:24.575 "state": "completed", 00:15:24.575 "digest": "sha256", 00:15:24.575 "dhgroup": "ffdhe6144" 00:15:24.575 } 00:15:24.575 } 00:15:24.575 ]' 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.575 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.833 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:24.833 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.766 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.024 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.282 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.282 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.282 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.282 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.847 00:15:26.847 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.847 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.847 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.105 { 00:15:27.105 "cntlid": 37, 00:15:27.105 "qid": 0, 00:15:27.105 "state": "enabled", 00:15:27.105 "thread": "nvmf_tgt_poll_group_000", 00:15:27.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:27.105 "listen_address": { 00:15:27.105 "trtype": "TCP", 00:15:27.105 "adrfam": "IPv4", 00:15:27.105 "traddr": "10.0.0.2", 00:15:27.105 "trsvcid": "4420" 00:15:27.105 }, 00:15:27.105 "peer_address": { 00:15:27.105 "trtype": "TCP", 00:15:27.105 "adrfam": "IPv4", 00:15:27.105 "traddr": "10.0.0.1", 00:15:27.105 "trsvcid": "47926" 00:15:27.105 }, 00:15:27.105 "auth": { 00:15:27.105 "state": "completed", 00:15:27.105 "digest": "sha256", 00:15:27.105 "dhgroup": "ffdhe6144" 00:15:27.105 } 00:15:27.105 } 00:15:27.105 ]' 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:27.105 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.362 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:27.363 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.295 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.554 12:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.554 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.120 00:15:29.120 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.120 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.120 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.378 { 00:15:29.378 "cntlid": 39, 00:15:29.378 "qid": 0, 00:15:29.378 "state": "enabled", 00:15:29.378 "thread": "nvmf_tgt_poll_group_000", 00:15:29.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:29.378 "listen_address": { 00:15:29.378 "trtype": "TCP", 00:15:29.378 "adrfam": "IPv4", 00:15:29.378 "traddr": "10.0.0.2", 00:15:29.378 "trsvcid": "4420" 00:15:29.378 }, 00:15:29.378 "peer_address": { 00:15:29.378 "trtype": "TCP", 00:15:29.378 "adrfam": "IPv4", 00:15:29.378 "traddr": "10.0.0.1", 00:15:29.378 "trsvcid": "47952" 00:15:29.378 }, 00:15:29.378 "auth": { 00:15:29.378 "state": "completed", 00:15:29.378 "digest": "sha256", 00:15:29.378 "dhgroup": "ffdhe6144" 00:15:29.378 } 00:15:29.378 } 00:15:29.378 ]' 00:15:29.378 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.635 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.892 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:29.892 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.826 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
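The iterations above all exercise the same round trip, once per (digest, dhgroup, keyid) combination. A minimal standalone sketch of that round trip, built only from the RPCs visible in this log, assuming the target listens on rpc.py's default socket while the host bdev layer uses /var/tmp/host.sock as in this run, and that the key0/ckey0 keyring entries were registered earlier in the script ($rpc and $hostnqn are illustrative shorthand, not variables from target/auth.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # host side: restrict negotiation to a single digest/dhgroup pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target side: authorize the host NQN; the ctrlr key makes auth bidirectional
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach the controller; DH-HMAC-CHAP runs during this connect
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # read back what the target negotiated; expect: completed sha256 ffdhe8192
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
    # tear down before the next combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0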
00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.083 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.015 00:15:32.015 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.015 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.015 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.273 { 00:15:32.273 "cntlid": 41, 00:15:32.273 "qid": 0, 00:15:32.273 "state": "enabled", 00:15:32.273 "thread": "nvmf_tgt_poll_group_000", 00:15:32.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:32.273 "listen_address": { 00:15:32.273 "trtype": "TCP", 00:15:32.273 "adrfam": "IPv4", 00:15:32.273 "traddr": "10.0.0.2", 00:15:32.273 "trsvcid": "4420" 00:15:32.273 }, 00:15:32.273 "peer_address": { 00:15:32.273 "trtype": "TCP", 00:15:32.273 "adrfam": "IPv4", 00:15:32.273 "traddr": "10.0.0.1", 00:15:32.273 "trsvcid": "47978" 00:15:32.273 }, 00:15:32.273 "auth": { 00:15:32.273 "state": "completed", 00:15:32.273 "digest": "sha256", 00:15:32.273 "dhgroup": "ffdhe8192" 00:15:32.273 } 00:15:32.273 } 00:15:32.273 ]' 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.273 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.274 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.274 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:32.274 12:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.274 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.274 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.274 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.531 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:32.531 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.466 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.724 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.659 00:15:34.659 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.659 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.659 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.922 { 00:15:34.922 "cntlid": 43, 00:15:34.922 "qid": 0, 00:15:34.922 "state": "enabled", 00:15:34.922 "thread": "nvmf_tgt_poll_group_000", 00:15:34.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:34.922 "listen_address": { 00:15:34.922 "trtype": "TCP", 00:15:34.922 "adrfam": "IPv4", 00:15:34.922 "traddr": "10.0.0.2", 00:15:34.922 "trsvcid": "4420" 00:15:34.922 }, 00:15:34.922 "peer_address": { 00:15:34.922 "trtype": "TCP", 00:15:34.922 "adrfam": "IPv4", 00:15:34.922 "traddr": "10.0.0.1", 00:15:34.922 "trsvcid": "51394" 00:15:34.922 }, 00:15:34.922 "auth": { 00:15:34.922 "state": "completed", 00:15:34.922 "digest": "sha256", 00:15:34.922 "dhgroup": "ffdhe8192" 00:15:34.922 } 00:15:34.922 } 00:15:34.922 ]' 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:34.922 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.253 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.253 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.253 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.253 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.253 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.511 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:35.511 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:15:36.445 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.446 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.703 12:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.703 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.704 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.704 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.704 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.704 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.704 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.267 00:15:37.524 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.524 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.524 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.782 { 00:15:37.782 "cntlid": 45, 00:15:37.782 "qid": 0, 00:15:37.782 "state": "enabled", 00:15:37.782 "thread": "nvmf_tgt_poll_group_000", 00:15:37.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:37.782 "listen_address": { 00:15:37.782 "trtype": "TCP", 00:15:37.782 "adrfam": "IPv4", 00:15:37.782 "traddr": "10.0.0.2", 00:15:37.782 "trsvcid": "4420" 00:15:37.782 }, 00:15:37.782 "peer_address": { 00:15:37.782 "trtype": "TCP", 00:15:37.782 "adrfam": "IPv4", 00:15:37.782 "traddr": "10.0.0.1", 00:15:37.782 "trsvcid": "51420" 00:15:37.782 }, 00:15:37.782 "auth": { 00:15:37.782 "state": "completed", 00:15:37.782 "digest": "sha256", 00:15:37.782 "dhgroup": "ffdhe8192" 00:15:37.782 } 00:15:37.782 } 00:15:37.782 ]' 00:15:37.782 
12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.782 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.039 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:38.039 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.971 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.229 12:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.229 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.162 00:15:40.162 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.162 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.162 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.420 { 00:15:40.420 "cntlid": 47, 00:15:40.420 "qid": 0, 00:15:40.420 "state": "enabled", 00:15:40.420 "thread": "nvmf_tgt_poll_group_000", 00:15:40.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:40.420 "listen_address": { 00:15:40.420 "trtype": "TCP", 00:15:40.420 "adrfam": "IPv4", 00:15:40.420 "traddr": "10.0.0.2", 00:15:40.420 "trsvcid": "4420" 00:15:40.420 }, 00:15:40.420 "peer_address": { 00:15:40.420 "trtype": "TCP", 00:15:40.420 "adrfam": "IPv4", 00:15:40.420 "traddr": "10.0.0.1", 00:15:40.420 "trsvcid": "51448" 00:15:40.420 }, 00:15:40.420 "auth": { 00:15:40.420 "state": "completed", 00:15:40.420 
"digest": "sha256", 00:15:40.420 "dhgroup": "ffdhe8192" 00:15:40.420 } 00:15:40.420 } 00:15:40.420 ]' 00:15:40.420 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.420 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.984 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:40.984 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.918 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:42.176 12:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.176 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.434 00:15:42.434 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.434 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.434 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.691 { 00:15:42.691 "cntlid": 49, 00:15:42.691 "qid": 0, 00:15:42.691 "state": "enabled", 00:15:42.691 "thread": "nvmf_tgt_poll_group_000", 00:15:42.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:42.691 "listen_address": { 00:15:42.691 "trtype": "TCP", 00:15:42.691 "adrfam": "IPv4", 
00:15:42.691 "traddr": "10.0.0.2", 00:15:42.691 "trsvcid": "4420" 00:15:42.691 }, 00:15:42.691 "peer_address": { 00:15:42.691 "trtype": "TCP", 00:15:42.691 "adrfam": "IPv4", 00:15:42.691 "traddr": "10.0.0.1", 00:15:42.691 "trsvcid": "51480" 00:15:42.691 }, 00:15:42.691 "auth": { 00:15:42.691 "state": "completed", 00:15:42.691 "digest": "sha384", 00:15:42.691 "dhgroup": "null" 00:15:42.691 } 00:15:42.691 } 00:15:42.691 ]' 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.691 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.692 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.692 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.949 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.949 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.949 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.207 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:43.207 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:44.140 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:44.397 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:44.398 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:44.655
00:15:44.655 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:44.655 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:44.655 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:44.914 {
00:15:44.914 "cntlid": 51,
00:15:44.914 "qid": 0,
00:15:44.914 "state": "enabled",
00:15:44.914 "thread": "nvmf_tgt_poll_group_000",
00:15:44.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:44.914 "listen_address": {
00:15:44.914 "trtype": "TCP",
00:15:44.914 "adrfam": "IPv4",
00:15:44.914 "traddr": "10.0.0.2",
00:15:44.914 "trsvcid": "4420"
00:15:44.914 },
00:15:44.914 "peer_address": {
00:15:44.914 "trtype": "TCP",
00:15:44.914 "adrfam": "IPv4",
00:15:44.914 "traddr": "10.0.0.1",
00:15:44.914 "trsvcid": "55836"
00:15:44.914 },
00:15:44.914 "auth": {
00:15:44.914 "state": "completed",
00:15:44.914 "digest": "sha384",
00:15:44.914 "dhgroup": "null"
00:15:44.914 }
00:15:44.914 }
00:15:44.914 ]'
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:44.914 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:45.171 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:45.171 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:45.171 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:45.171 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:45.171 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:45.429 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==:
00:15:45.429 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==:
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:46.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:46.362 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:46.620 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:46.877
00:15:46.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:46.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:46.877 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:47.134 {
00:15:47.134 "cntlid": 53,
00:15:47.134 "qid": 0,
00:15:47.134 "state": "enabled",
00:15:47.134 "thread": "nvmf_tgt_poll_group_000",
00:15:47.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:47.134 "listen_address": {
00:15:47.134 "trtype": "TCP",
00:15:47.134 "adrfam": "IPv4",
00:15:47.134 "traddr": "10.0.0.2",
00:15:47.134 "trsvcid": "4420"
00:15:47.134 },
00:15:47.134 "peer_address": {
00:15:47.134 "trtype": "TCP",
00:15:47.134 "adrfam": "IPv4",
00:15:47.134 "traddr": "10.0.0.1",
00:15:47.134 "trsvcid": "55872"
00:15:47.134 },
00:15:47.134 "auth": {
00:15:47.134 "state": "completed",
00:15:47.134 "digest": "sha384",
00:15:47.134 "dhgroup": "null"
00:15:47.134 }
00:15:47.134 }
00:15:47.134 ]'
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:47.134 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:47.391 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp:
00:15:47.391 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp:
00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:48.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
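On the target side the counterpart is nvmf_subsystem_add_host: the host NQN is admitted to the subsystem with a required DH-HMAC-CHAP key, then removed again once the round completes. A sketch of that pairing, assuming key2/ckey2 name keys already loaded into the target keyring (for example via keyring_file_add_key, which is not shown in this excerpt):

  RPC="scripts/rpc.py"   # target RPC, default /var/tmp/spdk.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # ... the host attaches and the negotiated auth parameters are verified ...
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"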
"${!keys[@]}" 00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.322 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.611 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.176 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.176 { 00:15:49.176 "cntlid": 55, 00:15:49.176 "qid": 0, 00:15:49.176 "state": "enabled", 00:15:49.176 "thread": "nvmf_tgt_poll_group_000", 00:15:49.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:49.176 "listen_address": { 00:15:49.176 "trtype": "TCP", 00:15:49.176 "adrfam": "IPv4", 00:15:49.176 "traddr": "10.0.0.2", 00:15:49.176 "trsvcid": "4420" 00:15:49.176 }, 00:15:49.176 "peer_address": { 00:15:49.176 "trtype": "TCP", 00:15:49.176 "adrfam": "IPv4", 00:15:49.176 "traddr": "10.0.0.1", 00:15:49.176 "trsvcid": "55904" 00:15:49.176 }, 00:15:49.176 "auth": { 00:15:49.176 "state": "completed", 00:15:49.176 "digest": "sha384", 00:15:49.176 "dhgroup": "null" 00:15:49.176 } 00:15:49.176 } 00:15:49.176 ]' 00:15:49.176 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.435 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.694 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:49.694 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.624 12:27:23 
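The @119 record marks the outer loop advancing from the null group to ffdhe2048, so the sha384/null rounds above were one full pass of a dhgroup-by-key sweep. Paraphrased from the @119-@123 trace lines, the driving loop amounts to:

  for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do       # key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done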
00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:50.624 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:50.882 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:51.140
00:15:51.140 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:51.140 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:51.140 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:51.398 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:51.656 {
00:15:51.656 "cntlid": 57,
00:15:51.656 "qid": 0,
00:15:51.656 "state": "enabled",
00:15:51.656 "thread": "nvmf_tgt_poll_group_000",
00:15:51.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:51.656 "listen_address": {
00:15:51.656 "trtype": "TCP",
00:15:51.656 "adrfam": "IPv4",
00:15:51.656 "traddr": "10.0.0.2",
00:15:51.656 "trsvcid": "4420"
00:15:51.656 },
00:15:51.656 "peer_address": {
00:15:51.656 "trtype": "TCP",
00:15:51.656 "adrfam": "IPv4",
00:15:51.656 "traddr": "10.0.0.1",
00:15:51.656 "trsvcid": "55928"
00:15:51.656 },
00:15:51.656 "auth": {
00:15:51.656 "state": "completed",
00:15:51.656 "digest": "sha384",
00:15:51.656 "dhgroup": "ffdhe2048"
00:15:51.656 }
00:15:51.656 }
00:15:51.656 ]'
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:51.656 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:51.915 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=:
00:15:51.915 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=:
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:52.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:52.847 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:53.105 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:53.672
00:15:53.672 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:53.672 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:53.672 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:53.930 {
00:15:53.930 "cntlid": 59,
00:15:53.930 "qid": 0,
00:15:53.930 "state": "enabled",
00:15:53.930 "thread": "nvmf_tgt_poll_group_000",
00:15:53.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:53.930 "listen_address": {
00:15:53.930 "trtype": "TCP",
00:15:53.930 "adrfam": "IPv4",
00:15:53.930 "traddr": "10.0.0.2",
00:15:53.930 "trsvcid": "4420"
00:15:53.930 },
00:15:53.930 "peer_address": {
00:15:53.930 "trtype": "TCP",
00:15:53.930 "adrfam": "IPv4",
00:15:53.930 "traddr": "10.0.0.1",
00:15:53.930 "trsvcid": "58700"
00:15:53.930 },
00:15:53.930 "auth": {
00:15:53.930 "state": "completed",
00:15:53.930 "digest": "sha384",
00:15:53.930 "dhgroup": "ffdhe2048"
00:15:53.930 }
00:15:53.930 }
00:15:53.930 ]'
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:53.930 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:54.188 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==:
00:15:54.188 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==:
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:55.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
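Besides the SPDK-initiator path, every round also exercises the kernel host through nvme-cli, passing the DHHC-1 secrets literally on the command line instead of by key name. Trimmed from the @36/@82 records above (the secrets are abbreviated here; the full strings appear in the adjacent log lines):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:01:YjUz...FB:' \
      --dhchap-ctrl-secret 'DHHC-1:02:Yzli...FQ==:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0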
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:55.121 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:55.380 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:55.946
00:15:55.946 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:55.946 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:55.946 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:56.204 {
00:15:56.204 "cntlid": 61,
00:15:56.204 "qid": 0,
00:15:56.204 "state": "enabled",
00:15:56.204 "thread": "nvmf_tgt_poll_group_000",
00:15:56.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:56.204 "listen_address": {
00:15:56.204 "trtype": "TCP",
00:15:56.204 "adrfam": "IPv4",
00:15:56.204 "traddr": "10.0.0.2",
00:15:56.204 "trsvcid": "4420"
00:15:56.204 },
00:15:56.204 "peer_address": {
00:15:56.204 "trtype": "TCP",
00:15:56.204 "adrfam": "IPv4",
00:15:56.204 "traddr": "10.0.0.1",
00:15:56.204 "trsvcid": "58728"
00:15:56.204 },
00:15:56.204 "auth": {
00:15:56.204 "state": "completed",
00:15:56.204 "digest": "sha384",
00:15:56.204 "dhgroup": "ffdhe2048"
00:15:56.204 }
00:15:56.204 }
00:15:56.204 ]'
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:56.204 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:56.462 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp:
00:15:56.462 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp:
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:57.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:57.395 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:57.653 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:58.217
00:15:58.217 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:58.217 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:58.217 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:58.474 {
00:15:58.474 "cntlid": 63,
00:15:58.474 "qid": 0,
00:15:58.474 "state": "enabled",
00:15:58.474 "thread": "nvmf_tgt_poll_group_000",
00:15:58.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:15:58.474 "listen_address": {
00:15:58.474 "trtype": "TCP",
00:15:58.474 "adrfam": "IPv4",
00:15:58.474 "traddr": "10.0.0.2",
00:15:58.474 "trsvcid": "4420"
00:15:58.474 },
00:15:58.474 "peer_address": {
00:15:58.474 "trtype": "TCP",
00:15:58.474 "adrfam": "IPv4",
00:15:58.474 "traddr": "10.0.0.1",
00:15:58.474 "trsvcid": "58758"
00:15:58.474 },
00:15:58.474 "auth": {
00:15:58.474 "state": "completed",
00:15:58.474 "digest": "sha384",
00:15:58.474 "dhgroup": "ffdhe2048"
00:15:58.474 }
00:15:58.474 }
00:15:58.474 ]'
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:58.474 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:58.474 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:58.474 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:58.474 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:58.474 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:58.474 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:58.474 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:58.758 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=:
00:15:58.758 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=:
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:59.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
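Note that the key3 rounds, like this one, pass only --dhchap-secret and no controller secret, so authentication there is unidirectional, while key0..key2 also verify the controller. The secret strings themselves are structured; a sketch that splits one into its fields (the second field appears to follow the DHHC-1 convention of 00 for an untransformed secret and 01/02/03 for SHA-256/384/512-transformed ones, which matches how key0..key3 are numbered in this log):

  key='DHHC-1:03:YWMz...fiQ=:'            # key3 from the log, truncated here
  IFS=: read -r tag hash b64 _ <<< "$key"
  echo "$tag"    # DHHC-1
  echo "$hash"   # 03
  echo "$b64"    # base64 of the secret material (plus a checksum, if the usual layout applies)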
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:59.692 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:59.950 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.208
00:16:00.208 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:00.208 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:00.208 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:00.466 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:00.466 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:00.466 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.466 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.466 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.466 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:00.466 {
00:16:00.466 "cntlid": 65,
00:16:00.466 "qid": 0,
00:16:00.466 "state": "enabled",
00:16:00.466 "thread": "nvmf_tgt_poll_group_000",
00:16:00.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:00.466 "listen_address": {
00:16:00.466 "trtype": "TCP",
00:16:00.466 "adrfam": "IPv4",
00:16:00.466 "traddr": "10.0.0.2",
00:16:00.466 "trsvcid": "4420"
00:16:00.466 },
00:16:00.466 "peer_address": {
00:16:00.466 "trtype": "TCP",
00:16:00.466 "adrfam": "IPv4",
00:16:00.466 "traddr": "10.0.0.1",
00:16:00.466 "trsvcid": "58790"
00:16:00.466 },
00:16:00.466 "auth": {
00:16:00.466 "state": "completed",
00:16:00.466 "digest": "sha384",
00:16:00.466 "dhgroup": "ffdhe3072"
00:16:00.466 }
00:16:00.466 }
00:16:00.466 ]'
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:00.724 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:00.982 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=:
00:16:00.982 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=:
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:01.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:01.914 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.172 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.430
00:16:02.430 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:02.430 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:02.430 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:02.687 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:02.687 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:02.687 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.687 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:02.944 {
00:16:02.944 "cntlid": 67,
00:16:02.944 "qid": 0,
00:16:02.944 "state": "enabled",
00:16:02.944 "thread": "nvmf_tgt_poll_group_000",
00:16:02.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:02.944 "listen_address": {
00:16:02.944 "trtype": "TCP",
00:16:02.944 "adrfam": "IPv4",
00:16:02.944 "traddr": "10.0.0.2",
00:16:02.944 "trsvcid": "4420"
00:16:02.944 },
00:16:02.944 "peer_address": {
00:16:02.944 "trtype": "TCP",
00:16:02.944 "adrfam": "IPv4",
00:16:02.944 "traddr": "10.0.0.1",
00:16:02.944 "trsvcid": "58830"
00:16:02.944 },
00:16:02.944 "auth": {
00:16:02.944 "state": "completed",
00:16:02.944 "digest": "sha384",
00:16:02.944 "dhgroup": "ffdhe3072"
00:16:02.944 }
00:16:02.944 }
00:16:02.944 ]'
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:02.944 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.202 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:03.202 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==:
00:16:03.202 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==:
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:04.136 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.393 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.650
00:16:04.650 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:04.650 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:04.650 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:04.908 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:04.908 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:04.908 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:04.908 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.166 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:05.166 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:05.166 {
00:16:05.166 "cntlid": 69,
00:16:05.166 "qid": 0,
00:16:05.166 "state": "enabled",
00:16:05.166 "thread": "nvmf_tgt_poll_group_000",
00:16:05.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:16:05.166 "listen_address": {
00:16:05.166 "trtype": "TCP",
00:16:05.166 "adrfam": "IPv4",
00:16:05.166 "traddr": "10.0.0.2",
00:16:05.166 "trsvcid": "4420"
00:16:05.166 },
00:16:05.166 "peer_address": {
00:16:05.166 "trtype": "TCP",
00:16:05.166 "adrfam": "IPv4",
00:16:05.166 "traddr": "10.0.0.1",
00:16:05.166 "trsvcid": "39838"
00:16:05.166 },
00:16:05.166 "auth": {
00:16:05.166 "state": "completed",
00:16:05.166 "digest": "sha384",
00:16:05.166 "dhgroup": "ffdhe3072"
00:16:05.166 }
00:16:05.166 }
00:16:05.166 ]'
00:16:05.166 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:05.166 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:05.167 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:05.167 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:05.167 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:05.167 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.167 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.167 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_detach_controller nvme0 00:16:05.424 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:05.424 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:06.354 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.354 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.354 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.354 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.354 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.354 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.355 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.355 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
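[annotation] The records above make up one pass of the test's auth matrix: for each (digest, dhgroup) combination, auth.sh re-restricts the host-side bdev driver with bdev_nvme_set_options, grants the host NQN access to the subsystem with a DH-HMAC-CHAP key pair via nvmf_subsystem_add_host, attaches a controller over TCP, and asserts the negotiated digest, dhgroup, and auth state from nvmf_subsystem_get_qpairs before detaching. A minimal sketch of one such iteration follows, assuming the target and host apps are already running and that the key names (key1/ckey1) were registered with the host earlier in the run (not shown in this excerpt); the three separate [[ ]] checks from the log are condensed into a single jq -e expression here.

# Sketch: one connect_authenticate iteration (sha384 + ffdhe3072, key pair 1).
# Assumes a running SPDK target (default RPC socket assumed) and host app
# (-s /var/tmp/host.sock), with keys "key1"/"ckey1" loaded earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Limit the host to the digest/dhgroup under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# Target side: allow the host NQN with its key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host side: attach an authenticated controller over TCP.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Verify what the target negotiated on the new qpair.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
    '.[0].auth.state == "completed" and .[0].auth.digest == "sha384" and .[0].auth.dhgroup == "ffdhe3072"'
# Tear down before the next (digest, dhgroup) pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0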
00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.918 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.174 00:16:07.174 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.174 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.174 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.504 { 00:16:07.504 "cntlid": 71, 00:16:07.504 "qid": 0, 00:16:07.504 "state": "enabled", 00:16:07.504 "thread": "nvmf_tgt_poll_group_000", 00:16:07.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:07.504 "listen_address": { 00:16:07.504 "trtype": "TCP", 00:16:07.504 "adrfam": "IPv4", 00:16:07.504 "traddr": "10.0.0.2", 00:16:07.504 "trsvcid": "4420" 00:16:07.504 }, 00:16:07.504 "peer_address": { 00:16:07.504 "trtype": "TCP", 00:16:07.504 "adrfam": "IPv4", 00:16:07.504 "traddr": "10.0.0.1", 00:16:07.504 "trsvcid": "39858" 00:16:07.504 }, 00:16:07.504 "auth": { 00:16:07.504 "state": "completed", 00:16:07.504 "digest": "sha384", 00:16:07.504 "dhgroup": "ffdhe3072" 00:16:07.504 } 00:16:07.504 } 00:16:07.504 ]' 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.504 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.504 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.504 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.504 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.504 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.504 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.811 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:07.811 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.745 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
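[annotation] Between these RPC-driven passes the kernel initiator is exercised as well: nvme-cli reconnects to the same subsystem passing the raw DHHC-1 secrets (the base64 blobs echoed verbatim in the records above), the controller is disconnected again, and the host entry is revoked so the next combination starts clean. A rough equivalent of that leg is sketched below, with the secret values elided ("...") for readability; the full strings appear in the log itself.

# Sketch: kernel-initiator leg of the same check. The DHHC-1 strings are the
# secrets generated earlier in this run; they are deliberately elided here.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n "$subnqn"
# Revoke the host on the target (default RPC socket assumed) before the next pass.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host "$subnqn" "$hostnqn"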
00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.003 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.261 00:16:09.261 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.261 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.261 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.521 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.521 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.521 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.521 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.778 { 00:16:09.778 "cntlid": 73, 00:16:09.778 "qid": 0, 00:16:09.778 "state": "enabled", 00:16:09.778 "thread": "nvmf_tgt_poll_group_000", 00:16:09.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:09.778 "listen_address": { 00:16:09.778 "trtype": "TCP", 00:16:09.778 "adrfam": "IPv4", 00:16:09.778 "traddr": "10.0.0.2", 00:16:09.778 "trsvcid": "4420" 00:16:09.778 }, 00:16:09.778 "peer_address": { 00:16:09.778 "trtype": "TCP", 00:16:09.778 "adrfam": "IPv4", 00:16:09.778 "traddr": "10.0.0.1", 00:16:09.778 "trsvcid": "39896" 00:16:09.778 }, 00:16:09.778 "auth": { 00:16:09.778 "state": "completed", 00:16:09.778 "digest": "sha384", 00:16:09.778 "dhgroup": "ffdhe4096" 00:16:09.778 } 00:16:09.778 } 00:16:09.778 ]' 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.778 
12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.778 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.036 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:10.036 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.971 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.229 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.488 00:16:11.488 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.488 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.488 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.747 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.747 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.747 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.747 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.005 { 00:16:12.005 "cntlid": 75, 00:16:12.005 "qid": 0, 00:16:12.005 "state": "enabled", 00:16:12.005 "thread": "nvmf_tgt_poll_group_000", 00:16:12.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:12.005 "listen_address": { 00:16:12.005 "trtype": "TCP", 00:16:12.005 "adrfam": "IPv4", 00:16:12.005 "traddr": "10.0.0.2", 00:16:12.005 "trsvcid": "4420" 00:16:12.005 }, 00:16:12.005 "peer_address": { 00:16:12.005 "trtype": "TCP", 00:16:12.005 "adrfam": "IPv4", 00:16:12.005 "traddr": "10.0.0.1", 00:16:12.005 "trsvcid": "39930" 00:16:12.005 }, 00:16:12.005 "auth": { 00:16:12.005 "state": "completed", 00:16:12.005 "digest": "sha384", 00:16:12.005 "dhgroup": "ffdhe4096" 00:16:12.005 } 00:16:12.005 } 00:16:12.005 ]' 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.263 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:12.263 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.197 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.456 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.020 00:16:14.020 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.020 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.020 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.020 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.021 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.021 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.021 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.021 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.021 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.021 { 00:16:14.021 "cntlid": 77, 00:16:14.021 "qid": 0, 00:16:14.021 "state": "enabled", 00:16:14.021 "thread": "nvmf_tgt_poll_group_000", 00:16:14.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:14.021 "listen_address": { 00:16:14.021 "trtype": "TCP", 00:16:14.021 "adrfam": "IPv4", 00:16:14.021 "traddr": "10.0.0.2", 00:16:14.021 "trsvcid": "4420" 00:16:14.021 }, 00:16:14.021 "peer_address": { 00:16:14.021 "trtype": "TCP", 00:16:14.021 "adrfam": "IPv4", 00:16:14.021 "traddr": "10.0.0.1", 00:16:14.021 "trsvcid": "34836" 00:16:14.021 }, 00:16:14.021 "auth": { 00:16:14.021 "state": "completed", 00:16:14.021 "digest": "sha384", 00:16:14.021 "dhgroup": "ffdhe4096" 00:16:14.021 } 00:16:14.021 } 00:16:14.021 ]' 00:16:14.021 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.278 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.278 12:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.278 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.278 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.278 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.278 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.278 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.535 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:14.536 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:15.468 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.468 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.468 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.468 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.468 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.468 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.469 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.469 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.726 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.984 00:16:16.241 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.241 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.241 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.499 { 00:16:16.499 "cntlid": 79, 00:16:16.499 "qid": 0, 00:16:16.499 "state": "enabled", 00:16:16.499 "thread": "nvmf_tgt_poll_group_000", 00:16:16.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:16.499 "listen_address": { 00:16:16.499 "trtype": "TCP", 00:16:16.499 "adrfam": "IPv4", 00:16:16.499 "traddr": "10.0.0.2", 00:16:16.499 "trsvcid": "4420" 00:16:16.499 }, 00:16:16.499 "peer_address": { 00:16:16.499 "trtype": "TCP", 00:16:16.499 "adrfam": "IPv4", 00:16:16.499 "traddr": "10.0.0.1", 00:16:16.499 "trsvcid": "34866" 00:16:16.499 }, 00:16:16.499 "auth": { 00:16:16.499 "state": "completed", 00:16:16.499 "digest": "sha384", 00:16:16.499 "dhgroup": "ffdhe4096" 00:16:16.499 } 00:16:16.499 } 00:16:16.499 ]' 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.499 12:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.499 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.499 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.499 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.499 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.499 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.499 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.756 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:16.756 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.687 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.944 12:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.944 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.509 00:16:18.509 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.509 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.509 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.767 { 00:16:18.767 "cntlid": 81, 00:16:18.767 "qid": 0, 00:16:18.767 "state": "enabled", 00:16:18.767 "thread": "nvmf_tgt_poll_group_000", 00:16:18.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:18.767 "listen_address": { 00:16:18.767 "trtype": "TCP", 00:16:18.767 "adrfam": "IPv4", 00:16:18.767 "traddr": "10.0.0.2", 00:16:18.767 "trsvcid": "4420" 00:16:18.767 }, 00:16:18.767 "peer_address": { 00:16:18.767 "trtype": "TCP", 00:16:18.767 "adrfam": "IPv4", 00:16:18.767 "traddr": "10.0.0.1", 00:16:18.767 "trsvcid": "34894" 00:16:18.767 }, 00:16:18.767 "auth": { 00:16:18.767 "state": "completed", 00:16:18.767 "digest": 
"sha384", 00:16:18.767 "dhgroup": "ffdhe6144" 00:16:18.767 } 00:16:18.767 } 00:16:18.767 ]' 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.767 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.026 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.026 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.026 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.284 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:19.284 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:20.215 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.473 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.036 00:16:21.036 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.036 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.036 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.293 { 00:16:21.293 "cntlid": 83, 00:16:21.293 "qid": 0, 00:16:21.293 "state": "enabled", 00:16:21.293 "thread": "nvmf_tgt_poll_group_000", 00:16:21.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:21.293 "listen_address": { 00:16:21.293 "trtype": "TCP", 00:16:21.293 "adrfam": "IPv4", 00:16:21.293 "traddr": "10.0.0.2", 00:16:21.293 
"trsvcid": "4420" 00:16:21.293 }, 00:16:21.293 "peer_address": { 00:16:21.293 "trtype": "TCP", 00:16:21.293 "adrfam": "IPv4", 00:16:21.293 "traddr": "10.0.0.1", 00:16:21.293 "trsvcid": "34918" 00:16:21.293 }, 00:16:21.293 "auth": { 00:16:21.293 "state": "completed", 00:16:21.293 "digest": "sha384", 00:16:21.293 "dhgroup": "ffdhe6144" 00:16:21.293 } 00:16:21.293 } 00:16:21.293 ]' 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.293 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.551 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.551 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.551 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.808 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:21.808 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.740 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.998 
12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.998 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.563 00:16:23.563 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.563 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.563 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.821 { 00:16:23.821 "cntlid": 85, 00:16:23.821 "qid": 0, 00:16:23.821 "state": "enabled", 00:16:23.821 "thread": "nvmf_tgt_poll_group_000", 00:16:23.821 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:23.821 "listen_address": { 00:16:23.821 "trtype": "TCP", 00:16:23.821 "adrfam": "IPv4", 00:16:23.821 "traddr": "10.0.0.2", 00:16:23.821 "trsvcid": "4420" 00:16:23.821 }, 00:16:23.821 "peer_address": { 00:16:23.821 "trtype": "TCP", 00:16:23.821 "adrfam": "IPv4", 00:16:23.821 "traddr": "10.0.0.1", 00:16:23.821 "trsvcid": "59924" 00:16:23.821 }, 00:16:23.821 "auth": { 00:16:23.821 "state": "completed", 00:16:23.821 "digest": "sha384", 00:16:23.821 "dhgroup": "ffdhe6144" 00:16:23.821 } 00:16:23.821 } 00:16:23.821 ]' 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.821 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.078 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:24.078 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.012 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.012 12:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.271 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.837 00:16:25.837 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.837 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.837 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.095 { 00:16:26.095 "cntlid": 87, 
00:16:26.095 "qid": 0, 00:16:26.095 "state": "enabled", 00:16:26.095 "thread": "nvmf_tgt_poll_group_000", 00:16:26.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:26.095 "listen_address": { 00:16:26.095 "trtype": "TCP", 00:16:26.095 "adrfam": "IPv4", 00:16:26.095 "traddr": "10.0.0.2", 00:16:26.095 "trsvcid": "4420" 00:16:26.095 }, 00:16:26.095 "peer_address": { 00:16:26.095 "trtype": "TCP", 00:16:26.095 "adrfam": "IPv4", 00:16:26.095 "traddr": "10.0.0.1", 00:16:26.095 "trsvcid": "59952" 00:16:26.095 }, 00:16:26.095 "auth": { 00:16:26.095 "state": "completed", 00:16:26.095 "digest": "sha384", 00:16:26.095 "dhgroup": "ffdhe6144" 00:16:26.095 } 00:16:26.095 } 00:16:26.095 ]' 00:16:26.095 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.353 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.610 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:26.611 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:27.547 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.804 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.737 00:16:28.737 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.737 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.737 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.994 { 00:16:28.994 "cntlid": 89, 00:16:28.994 "qid": 0, 00:16:28.994 "state": "enabled", 00:16:28.994 "thread": "nvmf_tgt_poll_group_000", 00:16:28.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:28.994 "listen_address": { 00:16:28.994 "trtype": "TCP", 00:16:28.994 "adrfam": "IPv4", 00:16:28.994 "traddr": "10.0.0.2", 00:16:28.994 "trsvcid": "4420" 00:16:28.994 }, 00:16:28.994 "peer_address": { 00:16:28.994 "trtype": "TCP", 00:16:28.994 "adrfam": "IPv4", 00:16:28.994 "traddr": "10.0.0.1", 00:16:28.994 "trsvcid": "59976" 00:16:28.994 }, 00:16:28.994 "auth": { 00:16:28.994 "state": "completed", 00:16:28.994 "digest": "sha384", 00:16:28.994 "dhgroup": "ffdhe8192" 00:16:28.994 } 00:16:28.994 } 00:16:28.994 ]' 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.994 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.252 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:29.252 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.184 12:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.184 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.441 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:30.441 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.441 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.442 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.374 00:16:31.374 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.374 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.374 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.631 { 00:16:31.631 "cntlid": 91, 00:16:31.631 "qid": 0, 00:16:31.631 "state": "enabled", 00:16:31.631 "thread": "nvmf_tgt_poll_group_000", 00:16:31.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:31.631 "listen_address": { 00:16:31.631 "trtype": "TCP", 00:16:31.631 "adrfam": "IPv4", 00:16:31.631 "traddr": "10.0.0.2", 00:16:31.631 "trsvcid": "4420" 00:16:31.631 }, 00:16:31.631 "peer_address": { 00:16:31.631 "trtype": "TCP", 00:16:31.631 "adrfam": "IPv4", 00:16:31.631 "traddr": "10.0.0.1", 00:16:31.631 "trsvcid": "59996" 00:16:31.631 }, 00:16:31.631 "auth": { 00:16:31.631 "state": "completed", 00:16:31.631 "digest": "sha384", 00:16:31.631 "dhgroup": "ffdhe8192" 00:16:31.631 } 00:16:31.631 } 00:16:31.631 ]' 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.631 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.888 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:31.888 12:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.870 12:28:05 
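The kernel-initiator leg of each iteration is the nvme-cli connect traced above; in plain form, with placeholders standing in for this run's generated DHHC-1 secrets:

# $host_key / $ctrl_key hold DHHC-1:xx:<base64>: secrets. Supplying both
# requests bidirectional authentication (host and controller each
# challenge the other). Flags mirror the invocation in the log.
nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0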
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.870 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.433 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.364 00:16:34.364 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.364 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.364 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.364 12:28:06 
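On the target side, authorizing each key pairing is one RPC; the add_host call traced above, written out plainly (key2/ckey2 are names of keyring entries the script registered with the target earlier, outside this excerpt, not inline secrets):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Default target socket: authorize the host NQN on the subsystem with a
# DH-CHAP key (host-authentication) and controller key (bidirectional).
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2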
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.364 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.364 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.364 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.364 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.364 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.364 { 00:16:34.364 "cntlid": 93, 00:16:34.364 "qid": 0, 00:16:34.364 "state": "enabled", 00:16:34.364 "thread": "nvmf_tgt_poll_group_000", 00:16:34.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.364 "listen_address": { 00:16:34.364 "trtype": "TCP", 00:16:34.364 "adrfam": "IPv4", 00:16:34.364 "traddr": "10.0.0.2", 00:16:34.364 "trsvcid": "4420" 00:16:34.364 }, 00:16:34.364 "peer_address": { 00:16:34.364 "trtype": "TCP", 00:16:34.364 "adrfam": "IPv4", 00:16:34.364 "traddr": "10.0.0.1", 00:16:34.364 "trsvcid": "43626" 00:16:34.364 }, 00:16:34.364 "auth": { 00:16:34.364 "state": "completed", 00:16:34.364 "digest": "sha384", 00:16:34.364 "dhgroup": "ffdhe8192" 00:16:34.364 } 00:16:34.364 } 00:16:34.364 ]' 00:16:34.364 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.364 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.364 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.621 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.621 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.621 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.621 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.621 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.879 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:34.879 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.811 12:28:08 
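The opaque DHHC-1 strings above are NVMe's key interchange format: a DHHC-1: prefix, a two-digit indicator of the applied transformation (00 none, 01/02/03 for SHA-256/384/512), a base64 blob of key material plus CRC, and a trailing colon. A compatible secret can be generated with nvme-cli; option names below are as in recent nvme-cli releases and this is a sketch, not taken from this run:

# Generate a SHA-256-transformed host key bound to this host NQN.
nvme gen-dhchap-key --hmac=1 \
    --nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55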
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.811 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.075 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.078 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.078 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.078 { 00:16:37.078 "cntlid": 95, 00:16:37.078 "qid": 0, 00:16:37.078 "state": "enabled", 00:16:37.078 "thread": "nvmf_tgt_poll_group_000", 00:16:37.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:37.079 "listen_address": { 00:16:37.079 "trtype": "TCP", 00:16:37.079 "adrfam": "IPv4", 00:16:37.079 "traddr": "10.0.0.2", 00:16:37.079 "trsvcid": "4420" 00:16:37.079 }, 00:16:37.079 "peer_address": { 00:16:37.079 "trtype": "TCP", 00:16:37.079 "adrfam": "IPv4", 00:16:37.079 "traddr": "10.0.0.1", 00:16:37.079 "trsvcid": "43656" 00:16:37.079 }, 00:16:37.079 "auth": { 00:16:37.079 "state": "completed", 00:16:37.079 "digest": "sha384", 00:16:37.079 "dhgroup": "ffdhe8192" 00:16:37.079 } 00:16:37.079 } 00:16:37.079 ]' 00:16:37.079 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.079 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.079 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.338 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.338 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.338 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.338 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.338 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.596 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:37.596 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:38.526 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.526 12:28:11 
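Note the asymmetry in the key3 pass above: no ckey3 exists, so nvmf_subsystem_add_host gets only --dhchap-key, nvme connect gets only --dhchap-secret, and authentication is unidirectional (the host proves itself; the controller is not challenged). The ckey=(${ckeys[$3]:+...}) line in the trace is what makes this drop out automatically; in isolation:

# Alternate-value expansion: if ckeys[3] is unset or empty, the whole
# expansion vanishes, the ckey array stays empty, and no
# --dhchap-ctrlr-key argument reaches the RPCs that splice it in.
declare -a ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
echo "${#ckey[@]}"   # prints 0 -> controller-side challenge omitted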
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.526 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.784 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.041 00:16:39.041 
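At this point the trace has advanced from sha384 to sha512 and from the ffdhe groups to the null DH group (plain challenge/response with no ephemeral DH exchange, hence no forward secrecy). The driver logic visible in the auth.sh@118-@121 frames is roughly the following; the array contents are assumed from portions of auth.sh not shown in this excerpt:

# Full negotiation matrix: every digest x dhgroup x key index.
for digest in "${digests[@]}"; do            # e.g. sha256 sha384 sha512
  for dhgroup in "${dhgroups[@]}"; do        # e.g. null ffdhe2048..ffdhe8192
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done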
12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.041 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.041 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.298 { 00:16:39.298 "cntlid": 97, 00:16:39.298 "qid": 0, 00:16:39.298 "state": "enabled", 00:16:39.298 "thread": "nvmf_tgt_poll_group_000", 00:16:39.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:39.298 "listen_address": { 00:16:39.298 "trtype": "TCP", 00:16:39.298 "adrfam": "IPv4", 00:16:39.298 "traddr": "10.0.0.2", 00:16:39.298 "trsvcid": "4420" 00:16:39.298 }, 00:16:39.298 "peer_address": { 00:16:39.298 "trtype": "TCP", 00:16:39.298 "adrfam": "IPv4", 00:16:39.298 "traddr": "10.0.0.1", 00:16:39.298 "trsvcid": "43678" 00:16:39.298 }, 00:16:39.298 "auth": { 00:16:39.298 "state": "completed", 00:16:39.298 "digest": "sha512", 00:16:39.298 "dhgroup": "null" 00:16:39.298 } 00:16:39.298 } 00:16:39.298 ]' 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.298 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.556 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.556 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.556 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.556 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.556 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.813 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:39.813 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.744 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.001 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.258 00:16:41.258 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.258 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.258 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.515 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.515 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.515 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.515 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.515 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.515 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.515 { 00:16:41.515 "cntlid": 99, 00:16:41.515 "qid": 0, 00:16:41.515 "state": "enabled", 00:16:41.515 "thread": "nvmf_tgt_poll_group_000", 00:16:41.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:41.515 "listen_address": { 00:16:41.515 "trtype": "TCP", 00:16:41.516 "adrfam": "IPv4", 00:16:41.516 "traddr": "10.0.0.2", 00:16:41.516 "trsvcid": "4420" 00:16:41.516 }, 00:16:41.516 "peer_address": { 00:16:41.516 "trtype": "TCP", 00:16:41.516 "adrfam": "IPv4", 00:16:41.516 "traddr": "10.0.0.1", 00:16:41.516 "trsvcid": "43710" 00:16:41.516 }, 00:16:41.516 "auth": { 00:16:41.516 "state": "completed", 00:16:41.516 "digest": "sha512", 00:16:41.516 "dhgroup": "null" 00:16:41.516 } 00:16:41.516 } 00:16:41.516 ]' 00:16:41.516 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.772 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.030 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:42.030 12:28:14 
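The SPDK-initiator leg, for comparison, is the bdev_nvme_attach_controller call repeated above; one iteration in plain form (key names again refer to keyring entries registered with the host application earlier in the run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Attach the authenticated controller as bdev "nvme0" on the host app.
# Note -s appears twice: once as the rpc.py socket, once as the trsvcid.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1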
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:42.963 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
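Each pass also ends the same way once the jq assertions hold: detach the host-side controller and de-authorize the host NQN so the next digest/dhgroup pairing starts from a clean subsystem. In plain form:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Host side: drop the bdev controller created for this iteration.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Target side (default socket): revoke the host's authorization.
"$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55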
00:16:43.221 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.479 00:16:43.479 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.479 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.479 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.737 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.737 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.737 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.737 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.737 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.738 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.738 { 00:16:43.738 "cntlid": 101, 00:16:43.738 "qid": 0, 00:16:43.738 "state": "enabled", 00:16:43.738 "thread": "nvmf_tgt_poll_group_000", 00:16:43.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:43.738 "listen_address": { 00:16:43.738 "trtype": "TCP", 00:16:43.738 "adrfam": "IPv4", 00:16:43.738 "traddr": "10.0.0.2", 00:16:43.738 "trsvcid": "4420" 00:16:43.738 }, 00:16:43.738 "peer_address": { 00:16:43.738 "trtype": "TCP", 00:16:43.738 "adrfam": "IPv4", 00:16:43.738 "traddr": "10.0.0.1", 00:16:43.738 "trsvcid": "44996" 00:16:43.738 }, 00:16:43.738 "auth": { 00:16:43.738 "state": "completed", 00:16:43.738 "digest": "sha512", 00:16:43.738 "dhgroup": "null" 00:16:43.738 } 00:16:43.738 } 00:16:43.738 ]' 00:16:43.738 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.738 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.738 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.996 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.996 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.996 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.996 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.996 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.254 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:44.254 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.188 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.446 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.703 00:16:45.703 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.703 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.703 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.962 { 00:16:45.962 "cntlid": 103, 00:16:45.962 "qid": 0, 00:16:45.962 "state": "enabled", 00:16:45.962 "thread": "nvmf_tgt_poll_group_000", 00:16:45.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.962 "listen_address": { 00:16:45.962 "trtype": "TCP", 00:16:45.962 "adrfam": "IPv4", 00:16:45.962 "traddr": "10.0.0.2", 00:16:45.962 "trsvcid": "4420" 00:16:45.962 }, 00:16:45.962 "peer_address": { 00:16:45.962 "trtype": "TCP", 00:16:45.962 "adrfam": "IPv4", 00:16:45.962 "traddr": "10.0.0.1", 00:16:45.962 "trsvcid": "45028" 00:16:45.962 }, 00:16:45.962 "auth": { 00:16:45.962 "state": "completed", 00:16:45.962 "digest": "sha512", 00:16:45.962 "dhgroup": "null" 00:16:45.962 } 00:16:45.962 } 00:16:45.962 ]' 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.962 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.220 12:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:46.220 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.150 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.407 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.971 00:16:47.971 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.971 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.971 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.227 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.227 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.227 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.227 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.228 { 00:16:48.228 "cntlid": 105, 00:16:48.228 "qid": 0, 00:16:48.228 "state": "enabled", 00:16:48.228 "thread": "nvmf_tgt_poll_group_000", 00:16:48.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:48.228 "listen_address": { 00:16:48.228 "trtype": "TCP", 00:16:48.228 "adrfam": "IPv4", 00:16:48.228 "traddr": "10.0.0.2", 00:16:48.228 "trsvcid": "4420" 00:16:48.228 }, 00:16:48.228 "peer_address": { 00:16:48.228 "trtype": "TCP", 00:16:48.228 "adrfam": "IPv4", 00:16:48.228 "traddr": "10.0.0.1", 00:16:48.228 "trsvcid": "45064" 00:16:48.228 }, 00:16:48.228 "auth": { 00:16:48.228 "state": "completed", 00:16:48.228 "digest": "sha512", 00:16:48.228 "dhgroup": "ffdhe2048" 00:16:48.228 } 00:16:48.228 } 00:16:48.228 ]' 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.228 12:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.228 12:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.485 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:48.485 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.417 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.675 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.676 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.676 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.676 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.676 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.676 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.240 00:16:50.240 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.240 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.240 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.498 { 00:16:50.498 "cntlid": 107, 00:16:50.498 "qid": 0, 00:16:50.498 "state": "enabled", 00:16:50.498 "thread": "nvmf_tgt_poll_group_000", 00:16:50.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.498 "listen_address": { 00:16:50.498 "trtype": "TCP", 00:16:50.498 "adrfam": "IPv4", 00:16:50.498 "traddr": "10.0.0.2", 00:16:50.498 "trsvcid": "4420" 00:16:50.498 }, 00:16:50.498 "peer_address": { 00:16:50.498 "trtype": "TCP", 00:16:50.498 "adrfam": "IPv4", 00:16:50.498 "traddr": "10.0.0.1", 00:16:50.498 "trsvcid": "45092" 00:16:50.498 }, 00:16:50.498 "auth": { 00:16:50.498 "state": "completed", 00:16:50.498 "digest": "sha512", 00:16:50.498 "dhgroup": "ffdhe2048" 00:16:50.498 } 00:16:50.498 } 00:16:50.498 ]' 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.498 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.498 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.498 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:50.498 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.498 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.498 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.756 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:50.756 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.687 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
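One detail worth calling out from the trace: at auth.sh@68 the optional controller key is assembled as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), so when a key index has no paired controller key the array expands to nothing and the flag is simply omitted; that is why key3 is added at auth.sh@70 with --dhchap-key key3 alone. A minimal illustration of the expansion, with placeholder array contents standing in for the real keys set up earlier in the script:

    # ${arr[i]:+word} expands to word only when arr[i] is set and non-empty.
    ckeys=( ckey0 ckey1 ckey2 "" )   # placeholder: no controller key at index 3
    keyid=3                          # stands in for the $3 positional parameter
    ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
    # ckey is now an empty array, so "${ckey[@]}" contributes no arguments:
    echo nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"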
00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.944 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.508 00:16:52.508 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.508 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.508 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.765 { 00:16:52.765 "cntlid": 109, 00:16:52.765 "qid": 0, 00:16:52.765 "state": "enabled", 00:16:52.765 "thread": "nvmf_tgt_poll_group_000", 00:16:52.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.765 "listen_address": { 00:16:52.765 "trtype": "TCP", 00:16:52.765 "adrfam": "IPv4", 00:16:52.765 "traddr": "10.0.0.2", 00:16:52.765 "trsvcid": "4420" 00:16:52.765 }, 00:16:52.765 "peer_address": { 00:16:52.765 "trtype": "TCP", 00:16:52.765 "adrfam": "IPv4", 00:16:52.765 "traddr": "10.0.0.1", 00:16:52.765 "trsvcid": "45132" 00:16:52.765 }, 00:16:52.765 "auth": { 00:16:52.765 "state": "completed", 00:16:52.765 "digest": "sha512", 00:16:52.765 "dhgroup": "ffdhe2048" 00:16:52.765 } 00:16:52.765 } 00:16:52.765 ]' 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.765 12:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.765 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.022 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:53.022 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.953 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.211 12:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.211 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.469 00:16:54.469 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.469 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.469 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.726 { 00:16:54.726 "cntlid": 111, 00:16:54.726 "qid": 0, 00:16:54.726 "state": "enabled", 00:16:54.726 "thread": "nvmf_tgt_poll_group_000", 00:16:54.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.726 "listen_address": { 00:16:54.726 "trtype": "TCP", 00:16:54.726 "adrfam": "IPv4", 00:16:54.726 "traddr": "10.0.0.2", 00:16:54.726 "trsvcid": "4420" 00:16:54.726 }, 00:16:54.726 "peer_address": { 00:16:54.726 "trtype": "TCP", 00:16:54.726 "adrfam": "IPv4", 00:16:54.726 "traddr": "10.0.0.1", 00:16:54.726 "trsvcid": "40092" 00:16:54.726 }, 00:16:54.726 "auth": { 00:16:54.726 "state": "completed", 00:16:54.726 "digest": "sha512", 00:16:54.726 "dhgroup": "ffdhe2048" 00:16:54.726 } 00:16:54.726 } 00:16:54.726 ]' 00:16:54.726 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.984 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.984 
12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.984 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.984 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.984 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.984 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.984 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.242 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:55.242 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.175 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.433 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.434 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.999 00:16:56.999 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.999 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.999 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.257 { 00:16:57.257 "cntlid": 113, 00:16:57.257 "qid": 0, 00:16:57.257 "state": "enabled", 00:16:57.257 "thread": "nvmf_tgt_poll_group_000", 00:16:57.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:57.257 "listen_address": { 00:16:57.257 "trtype": "TCP", 00:16:57.257 "adrfam": "IPv4", 00:16:57.257 "traddr": "10.0.0.2", 00:16:57.257 "trsvcid": "4420" 00:16:57.257 }, 00:16:57.257 "peer_address": { 00:16:57.257 "trtype": "TCP", 00:16:57.257 "adrfam": "IPv4", 00:16:57.257 "traddr": "10.0.0.1", 00:16:57.257 "trsvcid": "40122" 00:16:57.257 }, 00:16:57.257 "auth": { 00:16:57.257 "state": "completed", 00:16:57.257 "digest": "sha512", 00:16:57.257 "dhgroup": "ffdhe3072" 00:16:57.257 } 00:16:57.257 } 00:16:57.257 ]' 00:16:57.257 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.257 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.514 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:57.514 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:16:58.447 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.447 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.447 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.447 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.447 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.447 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.447 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.706 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.963 00:16:59.221 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.221 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.221 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.480 { 00:16:59.480 "cntlid": 115, 00:16:59.480 "qid": 0, 00:16:59.480 "state": "enabled", 00:16:59.480 "thread": "nvmf_tgt_poll_group_000", 00:16:59.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.480 "listen_address": { 00:16:59.480 "trtype": "TCP", 00:16:59.480 "adrfam": "IPv4", 00:16:59.480 "traddr": "10.0.0.2", 00:16:59.480 "trsvcid": "4420" 00:16:59.480 }, 00:16:59.480 "peer_address": { 00:16:59.480 "trtype": "TCP", 00:16:59.480 "adrfam": "IPv4", 
00:16:59.480 "traddr": "10.0.0.1", 00:16:59.480 "trsvcid": "40156" 00:16:59.480 }, 00:16:59.480 "auth": { 00:16:59.480 "state": "completed", 00:16:59.480 "digest": "sha512", 00:16:59.480 "dhgroup": "ffdhe3072" 00:16:59.480 } 00:16:59.480 } 00:16:59.480 ]' 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.480 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.480 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.480 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.480 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.480 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.480 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.738 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:16:59.738 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.670 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.927 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.928 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.928 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.928 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.928 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.494 00:17:01.494 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.494 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.494 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.752 { 00:17:01.752 "cntlid": 117, 00:17:01.752 "qid": 0, 00:17:01.752 "state": "enabled", 00:17:01.752 "thread": "nvmf_tgt_poll_group_000", 00:17:01.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:01.752 "listen_address": { 00:17:01.752 "trtype": "TCP", 
00:17:01.752 "adrfam": "IPv4", 00:17:01.752 "traddr": "10.0.0.2", 00:17:01.752 "trsvcid": "4420" 00:17:01.752 }, 00:17:01.752 "peer_address": { 00:17:01.752 "trtype": "TCP", 00:17:01.752 "adrfam": "IPv4", 00:17:01.752 "traddr": "10.0.0.1", 00:17:01.752 "trsvcid": "40192" 00:17:01.752 }, 00:17:01.752 "auth": { 00:17:01.752 "state": "completed", 00:17:01.752 "digest": "sha512", 00:17:01.752 "dhgroup": "ffdhe3072" 00:17:01.752 } 00:17:01.752 } 00:17:01.752 ]' 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.752 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.010 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:02.010 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:02.943 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.944 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.202 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.768 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.768 { 00:17:03.768 "cntlid": 119, 00:17:03.768 "qid": 0, 00:17:03.768 "state": "enabled", 00:17:03.768 "thread": "nvmf_tgt_poll_group_000", 00:17:03.768 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.768 "listen_address": { 00:17:03.768 "trtype": "TCP", 00:17:03.768 "adrfam": "IPv4", 00:17:03.768 "traddr": "10.0.0.2", 00:17:03.768 "trsvcid": "4420" 00:17:03.768 }, 00:17:03.768 "peer_address": { 00:17:03.768 "trtype": "TCP", 00:17:03.768 "adrfam": "IPv4", 00:17:03.768 "traddr": "10.0.0.1", 00:17:03.768 "trsvcid": "47340" 00:17:03.768 }, 00:17:03.768 "auth": { 00:17:03.768 "state": "completed", 00:17:03.768 "digest": "sha512", 00:17:03.768 "dhgroup": "ffdhe3072" 00:17:03.768 } 00:17:03.768 } 00:17:03.768 ]' 00:17:03.768 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.026 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.286 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:04.286 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.218 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.218 12:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.476 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.733 00:17:05.992 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.992 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.992 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.286 12:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.286 { 00:17:06.286 "cntlid": 121, 00:17:06.286 "qid": 0, 00:17:06.286 "state": "enabled", 00:17:06.286 "thread": "nvmf_tgt_poll_group_000", 00:17:06.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.286 "listen_address": { 00:17:06.286 "trtype": "TCP", 00:17:06.286 "adrfam": "IPv4", 00:17:06.286 "traddr": "10.0.0.2", 00:17:06.286 "trsvcid": "4420" 00:17:06.286 }, 00:17:06.286 "peer_address": { 00:17:06.286 "trtype": "TCP", 00:17:06.286 "adrfam": "IPv4", 00:17:06.286 "traddr": "10.0.0.1", 00:17:06.286 "trsvcid": "47388" 00:17:06.286 }, 00:17:06.286 "auth": { 00:17:06.286 "state": "completed", 00:17:06.286 "digest": "sha512", 00:17:06.286 "dhgroup": "ffdhe4096" 00:17:06.286 } 00:17:06.286 } 00:17:06.286 ]' 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.286 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.287 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.566 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:06.566 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
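The iteration that just finished also exercised the same key material through the kernel initiator: nvme_connect hands the DHHC-1 secrets straight to nvme-cli, and success is confirmed by the "disconnected 1 controller(s)" line that follows. As a rough sketch of that path (assuming an nvme-cli build with DH-HMAC-CHAP support; the elided secrets are the test's fixed keys, shown truncated):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret 'DHHC-1:00:...' \
      --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The DHHC-1:NN:<base64>: strings follow the DH-HMAC-CHAP secret representation used by nvme-cli and SPDK; the two-digit field after "DHHC-1" records how the secret was transformed (00 for a plain secret, 01/02/03 for SHA-256/384/512 variants), which is why the key and controller-key secrets in the log carry different prefixes.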
00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.503 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.761 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.020 00:17:08.020 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.020 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.020 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.278 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.278 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.278 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.278 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.536 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.536 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.536 { 00:17:08.536 "cntlid": 123, 00:17:08.536 "qid": 0, 00:17:08.536 "state": "enabled", 00:17:08.536 "thread": "nvmf_tgt_poll_group_000", 00:17:08.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:08.536 "listen_address": { 00:17:08.536 "trtype": "TCP", 00:17:08.536 "adrfam": "IPv4", 00:17:08.536 "traddr": "10.0.0.2", 00:17:08.536 "trsvcid": "4420" 00:17:08.536 }, 00:17:08.536 "peer_address": { 00:17:08.536 "trtype": "TCP", 00:17:08.536 "adrfam": "IPv4", 00:17:08.536 "traddr": "10.0.0.1", 00:17:08.536 "trsvcid": "47408" 00:17:08.536 }, 00:17:08.536 "auth": { 00:17:08.536 "state": "completed", 00:17:08.536 "digest": "sha512", 00:17:08.536 "dhgroup": "ffdhe4096" 00:17:08.536 } 00:17:08.536 } 00:17:08.536 ]' 00:17:08.536 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.536 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.794 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:08.794 12:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.728 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.728 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.985 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:09.985 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.985 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.985 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.986 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.551 00:17:10.551 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.551 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.551 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.809 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.810 12:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.810 { 00:17:10.810 "cntlid": 125, 00:17:10.810 "qid": 0, 00:17:10.810 "state": "enabled", 00:17:10.810 "thread": "nvmf_tgt_poll_group_000", 00:17:10.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:10.810 "listen_address": { 00:17:10.810 "trtype": "TCP", 00:17:10.810 "adrfam": "IPv4", 00:17:10.810 "traddr": "10.0.0.2", 00:17:10.810 "trsvcid": "4420" 00:17:10.810 }, 00:17:10.810 "peer_address": { 00:17:10.810 "trtype": "TCP", 00:17:10.810 "adrfam": "IPv4", 00:17:10.810 "traddr": "10.0.0.1", 00:17:10.810 "trsvcid": "47430" 00:17:10.810 }, 00:17:10.810 "auth": { 00:17:10.810 "state": "completed", 00:17:10.810 "digest": "sha512", 00:17:10.810 "dhgroup": "ffdhe4096" 00:17:10.810 } 00:17:10.810 } 00:17:10.810 ]' 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.810 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.068 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:11.068 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.002 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.260 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.825 00:17:12.825 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.825 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.825 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.084 12:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.084 { 00:17:13.084 "cntlid": 127, 00:17:13.084 "qid": 0, 00:17:13.084 "state": "enabled", 00:17:13.084 "thread": "nvmf_tgt_poll_group_000", 00:17:13.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:13.084 "listen_address": { 00:17:13.084 "trtype": "TCP", 00:17:13.084 "adrfam": "IPv4", 00:17:13.084 "traddr": "10.0.0.2", 00:17:13.084 "trsvcid": "4420" 00:17:13.084 }, 00:17:13.084 "peer_address": { 00:17:13.084 "trtype": "TCP", 00:17:13.084 "adrfam": "IPv4", 00:17:13.084 "traddr": "10.0.0.1", 00:17:13.084 "trsvcid": "56772" 00:17:13.084 }, 00:17:13.084 "auth": { 00:17:13.084 "state": "completed", 00:17:13.084 "digest": "sha512", 00:17:13.084 "dhgroup": "ffdhe4096" 00:17:13.084 } 00:17:13.084 } 00:17:13.084 ]' 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.084 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.342 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:13.342 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:14.275 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.275 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.275 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.275 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.276 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.276 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.276 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.276 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.276 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.534 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.102 00:17:15.102 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.102 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.102 
12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.360 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.361 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.361 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.361 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.361 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.361 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.361 { 00:17:15.361 "cntlid": 129, 00:17:15.361 "qid": 0, 00:17:15.361 "state": "enabled", 00:17:15.361 "thread": "nvmf_tgt_poll_group_000", 00:17:15.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.361 "listen_address": { 00:17:15.361 "trtype": "TCP", 00:17:15.361 "adrfam": "IPv4", 00:17:15.361 "traddr": "10.0.0.2", 00:17:15.361 "trsvcid": "4420" 00:17:15.361 }, 00:17:15.361 "peer_address": { 00:17:15.361 "trtype": "TCP", 00:17:15.361 "adrfam": "IPv4", 00:17:15.361 "traddr": "10.0.0.1", 00:17:15.361 "trsvcid": "56802" 00:17:15.361 }, 00:17:15.361 "auth": { 00:17:15.361 "state": "completed", 00:17:15.361 "digest": "sha512", 00:17:15.361 "dhgroup": "ffdhe6144" 00:17:15.361 } 00:17:15.361 } 00:17:15.361 ]' 00:17:15.361 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.619 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.876 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:15.877 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret 
DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.810 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.070 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.637 00:17:17.637 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.637 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.637 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.894 { 00:17:17.894 "cntlid": 131, 00:17:17.894 "qid": 0, 00:17:17.894 "state": "enabled", 00:17:17.894 "thread": "nvmf_tgt_poll_group_000", 00:17:17.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:17.894 "listen_address": { 00:17:17.894 "trtype": "TCP", 00:17:17.894 "adrfam": "IPv4", 00:17:17.894 "traddr": "10.0.0.2", 00:17:17.894 "trsvcid": "4420" 00:17:17.894 }, 00:17:17.894 "peer_address": { 00:17:17.894 "trtype": "TCP", 00:17:17.894 "adrfam": "IPv4", 00:17:17.894 "traddr": "10.0.0.1", 00:17:17.894 "trsvcid": "56820" 00:17:17.894 }, 00:17:17.894 "auth": { 00:17:17.894 "state": "completed", 00:17:17.894 "digest": "sha512", 00:17:17.894 "dhgroup": "ffdhe6144" 00:17:17.894 } 00:17:17.894 } 00:17:17.894 ]' 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.894 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.152 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.152 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.152 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.152 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.410 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:18.410 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.341 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.599 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.163 00:17:20.163 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.163 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.163 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.421 { 00:17:20.421 "cntlid": 133, 00:17:20.421 "qid": 0, 00:17:20.421 "state": "enabled", 00:17:20.421 "thread": "nvmf_tgt_poll_group_000", 00:17:20.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:20.421 "listen_address": { 00:17:20.421 "trtype": "TCP", 00:17:20.421 "adrfam": "IPv4", 00:17:20.421 "traddr": "10.0.0.2", 00:17:20.421 "trsvcid": "4420" 00:17:20.421 }, 00:17:20.421 "peer_address": { 00:17:20.421 "trtype": "TCP", 00:17:20.421 "adrfam": "IPv4", 00:17:20.421 "traddr": "10.0.0.1", 00:17:20.421 "trsvcid": "56864" 00:17:20.421 }, 00:17:20.421 "auth": { 00:17:20.421 "state": "completed", 00:17:20.421 "digest": "sha512", 00:17:20.421 "dhgroup": "ffdhe6144" 00:17:20.421 } 00:17:20.421 } 00:17:20.421 ]' 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.421 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.421 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.421 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.421 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.421 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.421 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.677 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret 
DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:20.677 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.607 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.172 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.173 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.173 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.173 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:22.173 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.737 00:17:22.738 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.738 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.738 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.996 { 00:17:22.996 "cntlid": 135, 00:17:22.996 "qid": 0, 00:17:22.996 "state": "enabled", 00:17:22.996 "thread": "nvmf_tgt_poll_group_000", 00:17:22.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:22.996 "listen_address": { 00:17:22.996 "trtype": "TCP", 00:17:22.996 "adrfam": "IPv4", 00:17:22.996 "traddr": "10.0.0.2", 00:17:22.996 "trsvcid": "4420" 00:17:22.996 }, 00:17:22.996 "peer_address": { 00:17:22.996 "trtype": "TCP", 00:17:22.996 "adrfam": "IPv4", 00:17:22.996 "traddr": "10.0.0.1", 00:17:22.996 "trsvcid": "56886" 00:17:22.996 }, 00:17:22.996 "auth": { 00:17:22.996 "state": "completed", 00:17:22.996 "digest": "sha512", 00:17:22.996 "dhgroup": "ffdhe6144" 00:17:22.996 } 00:17:22.996 } 00:17:22.996 ]' 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.996 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.561 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:23.562 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.509 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.509 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.444 00:17:25.444 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.444 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.444 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.732 { 00:17:25.732 "cntlid": 137, 00:17:25.732 "qid": 0, 00:17:25.732 "state": "enabled", 00:17:25.732 "thread": "nvmf_tgt_poll_group_000", 00:17:25.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:25.732 "listen_address": { 00:17:25.732 "trtype": "TCP", 00:17:25.732 "adrfam": "IPv4", 00:17:25.732 "traddr": "10.0.0.2", 00:17:25.732 "trsvcid": "4420" 00:17:25.732 }, 00:17:25.732 "peer_address": { 00:17:25.732 "trtype": "TCP", 00:17:25.732 "adrfam": "IPv4", 00:17:25.732 "traddr": "10.0.0.1", 00:17:25.732 "trsvcid": "39530" 00:17:25.732 }, 00:17:25.732 "auth": { 00:17:25.732 "state": "completed", 00:17:25.732 "digest": "sha512", 00:17:25.732 "dhgroup": "ffdhe8192" 00:17:25.732 } 00:17:25.732 } 00:17:25.732 ]' 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.732 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.298 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:26.298 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.231 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.489 12:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.489 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.422 00:17:28.422 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.422 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.422 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.422 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.422 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.422 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.422 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.679 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.679 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.679 { 00:17:28.679 "cntlid": 139, 00:17:28.679 "qid": 0, 00:17:28.679 "state": "enabled", 00:17:28.679 "thread": "nvmf_tgt_poll_group_000", 00:17:28.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:28.680 "listen_address": { 00:17:28.680 "trtype": "TCP", 00:17:28.680 "adrfam": "IPv4", 00:17:28.680 "traddr": "10.0.0.2", 00:17:28.680 "trsvcid": "4420" 00:17:28.680 }, 00:17:28.680 "peer_address": { 00:17:28.680 "trtype": "TCP", 00:17:28.680 "adrfam": "IPv4", 00:17:28.680 "traddr": "10.0.0.1", 00:17:28.680 "trsvcid": "39554" 00:17:28.680 }, 00:17:28.680 "auth": { 00:17:28.680 "state": "completed", 00:17:28.680 "digest": "sha512", 00:17:28.680 "dhgroup": "ffdhe8192" 00:17:28.680 } 00:17:28.680 } 00:17:28.680 ]' 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.680 12:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.680 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.938 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:28.938 12:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: --dhchap-ctrl-secret DHHC-1:02:YzliMGY1M2MyYmEzNDcwNThmYjdhODQ1NzBiNTgxZWE2NGFhOTdhODg2NWQ5YTY2s0FUFQ==: 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.871 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.128 12:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.128 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.061 00:17:31.061 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.061 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.061 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.318 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.318 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.319 { 00:17:31.319 "cntlid": 141, 00:17:31.319 "qid": 0, 00:17:31.319 "state": "enabled", 00:17:31.319 "thread": "nvmf_tgt_poll_group_000", 00:17:31.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:31.319 "listen_address": { 00:17:31.319 "trtype": "TCP", 00:17:31.319 "adrfam": "IPv4", 00:17:31.319 "traddr": "10.0.0.2", 00:17:31.319 "trsvcid": "4420" 00:17:31.319 }, 00:17:31.319 "peer_address": { 00:17:31.319 "trtype": "TCP", 00:17:31.319 "adrfam": "IPv4", 00:17:31.319 "traddr": "10.0.0.1", 00:17:31.319 "trsvcid": "39590" 00:17:31.319 }, 00:17:31.319 "auth": { 00:17:31.319 "state": "completed", 00:17:31.319 "digest": "sha512", 00:17:31.319 "dhgroup": "ffdhe8192" 00:17:31.319 } 00:17:31.319 } 00:17:31.319 ]' 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.319 12:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.319 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.577 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:31.577 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:01:MjkyYTY5MWI1ZWJlNDU0Yjc0ZGUzMWNiMDJkZGIzYWXQbtwp: 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.511 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.768 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.769 12:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.769 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.702 00:17:33.702 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.702 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.702 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.960 { 00:17:33.960 "cntlid": 143, 00:17:33.960 "qid": 0, 00:17:33.960 "state": "enabled", 00:17:33.960 "thread": "nvmf_tgt_poll_group_000", 00:17:33.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:33.960 "listen_address": { 00:17:33.960 "trtype": "TCP", 00:17:33.960 "adrfam": "IPv4", 00:17:33.960 "traddr": "10.0.0.2", 00:17:33.960 "trsvcid": "4420" 00:17:33.960 }, 00:17:33.960 "peer_address": { 00:17:33.960 "trtype": "TCP", 00:17:33.960 "adrfam": "IPv4", 00:17:33.960 "traddr": "10.0.0.1", 00:17:33.960 "trsvcid": "42848" 00:17:33.960 }, 00:17:33.960 "auth": { 00:17:33.960 "state": "completed", 00:17:33.960 "digest": "sha512", 00:17:33.960 "dhgroup": "ffdhe8192" 00:17:33.960 } 00:17:33.960 } 00:17:33.960 ]' 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.960 
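The records above repeat one connect_authenticate iteration per digest/DH-group pair: the host-side bdev_nvme_set_options RPC pins the DH-HMAC-CHAP digest and group, nvmf_subsystem_add_host registers the host NQN on the subsystem with the key under test (plus an optional controller key for bidirectional authentication), bdev_nvme_attach_controller performs the authenticated connect, and nvmf_subsystem_get_qpairs is filtered through jq to assert the negotiated digest, dhgroup, and a "completed" auth state. A minimal stand-alone sketch of one such iteration follows, assuming the target and host apps from this run are already up; the sockets, NQNs, addresses, and every flag are copied from the log rather than invented:

    #!/usr/bin/env bash
    # One connect_authenticate pass (sha512 / ffdhe8192, key0), as in target/auth.sh.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host-side initiator (its own RPC socket): restrict negotiation to one
    # digest and one DH group so exactly this combination gets exercised.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side (default /var/tmp/spdk.sock): allow the host with key0/ckey0.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authenticated connect from the host app.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Inspect what the target negotiated, mirroring the jq checks in the log.
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

The kernel-initiator leg of the same loop goes through nvme-cli instead, passing the raw secrets inline (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...), then tears down with nvme disconnect -n and nvmf_subsystem_remove_host before the next key is tried.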
12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.960 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.219 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.219 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.219 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.477 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:34.477 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.410 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.669 12:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.669 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.653 00:17:36.653 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.653 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.653 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.941 { 00:17:36.941 "cntlid": 145, 00:17:36.941 "qid": 0, 00:17:36.941 "state": "enabled", 00:17:36.941 "thread": "nvmf_tgt_poll_group_000", 00:17:36.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:36.941 "listen_address": { 00:17:36.941 "trtype": "TCP", 00:17:36.941 "adrfam": "IPv4", 00:17:36.941 "traddr": "10.0.0.2", 00:17:36.941 "trsvcid": "4420" 00:17:36.941 }, 00:17:36.941 "peer_address": { 00:17:36.941 
"trtype": "TCP", 00:17:36.941 "adrfam": "IPv4", 00:17:36.941 "traddr": "10.0.0.1", 00:17:36.941 "trsvcid": "42880" 00:17:36.941 }, 00:17:36.941 "auth": { 00:17:36.941 "state": "completed", 00:17:36.941 "digest": "sha512", 00:17:36.941 "dhgroup": "ffdhe8192" 00:17:36.941 } 00:17:36.941 } 00:17:36.941 ]' 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.941 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.199 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:37.199 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZGU1NDNhYmZmZjljNjg2NTZhMzQ1ZjQ0OWMwYmExZGZkNGY4ZWM4OWNhZmQwZmRi05Yeaw==: --dhchap-ctrl-secret DHHC-1:03:NGZiNjI4N2NmOWFmM2RlZjJhMDcwNzM2NzM2NmU3ZTdiNGMwMzk5MDM1YzU0MDgxY2M2ZjNmMDdhNWQ1ODk4NtP3Jkc=: 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.133 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:39.066 request: 00:17:39.066 { 00:17:39.066 "name": "nvme0", 00:17:39.066 "trtype": "tcp", 00:17:39.066 "traddr": "10.0.0.2", 00:17:39.066 "adrfam": "ipv4", 00:17:39.066 "trsvcid": "4420", 00:17:39.066 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:39.066 "prchk_reftag": false, 00:17:39.066 "prchk_guard": false, 00:17:39.066 "hdgst": false, 00:17:39.066 "ddgst": false, 00:17:39.066 "dhchap_key": "key2", 00:17:39.066 "allow_unrecognized_csi": false, 00:17:39.066 "method": "bdev_nvme_attach_controller", 00:17:39.066 "req_id": 1 00:17:39.066 } 00:17:39.066 Got JSON-RPC error response 00:17:39.066 response: 00:17:39.066 { 00:17:39.066 "code": -5, 00:17:39.066 "message": "Input/output error" 00:17:39.066 } 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.066 12:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.066 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.631 request: 00:17:39.631 { 00:17:39.631 "name": "nvme0", 00:17:39.631 "trtype": "tcp", 00:17:39.631 "traddr": "10.0.0.2", 00:17:39.631 "adrfam": "ipv4", 00:17:39.631 "trsvcid": "4420", 00:17:39.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:39.631 "prchk_reftag": false, 00:17:39.631 "prchk_guard": false, 00:17:39.631 "hdgst": false, 00:17:39.631 "ddgst": false, 00:17:39.632 "dhchap_key": "key1", 00:17:39.632 "dhchap_ctrlr_key": "ckey2", 00:17:39.632 "allow_unrecognized_csi": false, 00:17:39.632 "method": "bdev_nvme_attach_controller", 00:17:39.632 "req_id": 1 00:17:39.632 } 00:17:39.632 Got JSON-RPC error response 00:17:39.632 response: 00:17:39.632 { 00:17:39.632 "code": -5, 00:17:39.632 "message": "Input/output error" 00:17:39.632 } 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.890 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.890 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.457 request: 00:17:40.457 { 00:17:40.457 "name": "nvme0", 00:17:40.457 "trtype": "tcp", 00:17:40.457 "traddr": "10.0.0.2", 00:17:40.457 "adrfam": "ipv4", 00:17:40.457 "trsvcid": "4420", 00:17:40.457 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:40.457 "prchk_reftag": false, 00:17:40.457 "prchk_guard": false, 00:17:40.457 "hdgst": false, 00:17:40.457 "ddgst": false, 00:17:40.457 "dhchap_key": "key1", 00:17:40.457 "dhchap_ctrlr_key": "ckey1", 00:17:40.457 "allow_unrecognized_csi": false, 00:17:40.457 "method": "bdev_nvme_attach_controller", 00:17:40.457 "req_id": 1 00:17:40.457 } 00:17:40.457 Got JSON-RPC error response 00:17:40.457 response: 00:17:40.457 { 00:17:40.457 "code": -5, 00:17:40.457 "message": "Input/output error" 00:17:40.457 } 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 598025 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 598025 ']' 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 598025 00:17:40.457 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 598025 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 598025' 00:17:40.715 killing process with pid 598025 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 598025 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 598025 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=620680 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 620680 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 620680 ']' 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.715 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 620680 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 620680 ']' 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
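The block above tears down the first nvmf_tgt instance (pid 598025) and restarts the target under --wait-for-rpc with the nvmf_auth debug log flag, so the DH-HMAC-CHAP secrets can be registered in the keyring before initialization completes; the keyring_file_add_key calls follow below. A minimal sketch of that startup sequence, assuming the stock rpc.py client and the standard framework_start_init RPC to leave the wait-for-rpc state (only the nvmf_tgt invocation and the keyring calls appear verbatim in this log):

  # start the target inside the test namespace, held in the pre-init state
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  # register each generated secret as a named key, then finish initialization
  rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.mcl
  rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK
  rpc.py framework_start_init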
00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.283 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.542 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:41.542 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:41.542 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 null0 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mcl 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ZbK ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZbK 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xfO 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Bwb ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwb 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.542 12:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5VG 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.dRp ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dRp 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Y5L 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
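With keys key0 through key3 (and controller keys ckey0 through ckey2) loaded, connect_authenticate walks the digest/dhgroup combinations; the case above is sha512 with ffdhe8192 using key3. The two halves of that handshake, as issued through the two RPC sockets in this log (the host side runs as a separate application listening on /var/tmp/host.sock):

  # target side: authorize the host NQN on the subsystem and bind it to key3
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
  # host side: attach with the matching key; nvmf_subsystem_get_qpairs then reports
  # auth state "completed" with the negotiated digest and dhgroup
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

The negative cases further down follow the same shape: once bdev_nvme_set_options restricts the host's allowed digests or dhgroups so they no longer match the key in use, the same attach fails with -5 (Input/output error), which is exactly what the NOT wrappers assert.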
00:17:41.542 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.914 nvme0n1 00:17:42.914 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.914 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.914 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.171 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.171 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.172 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.172 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.172 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.172 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.172 { 00:17:43.172 "cntlid": 1, 00:17:43.172 "qid": 0, 00:17:43.172 "state": "enabled", 00:17:43.172 "thread": "nvmf_tgt_poll_group_000", 00:17:43.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:43.172 "listen_address": { 00:17:43.172 "trtype": "TCP", 00:17:43.172 "adrfam": "IPv4", 00:17:43.172 "traddr": "10.0.0.2", 00:17:43.172 "trsvcid": "4420" 00:17:43.172 }, 00:17:43.172 "peer_address": { 00:17:43.172 "trtype": "TCP", 00:17:43.172 "adrfam": "IPv4", 00:17:43.172 "traddr": "10.0.0.1", 00:17:43.172 "trsvcid": "42938" 00:17:43.172 }, 00:17:43.172 "auth": { 00:17:43.172 "state": "completed", 00:17:43.172 "digest": "sha512", 00:17:43.172 "dhgroup": "ffdhe8192" 00:17:43.172 } 00:17:43.172 } 00:17:43.172 ]' 00:17:43.172 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.429 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.687 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:43.687 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:44.620 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:44.878 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:45.136 request:
00:17:45.136 {
00:17:45.136 "name": "nvme0",
00:17:45.136 "trtype": "tcp",
00:17:45.136 "traddr": "10.0.0.2",
00:17:45.136 "adrfam": "ipv4",
00:17:45.136 "trsvcid": "4420",
00:17:45.136 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:45.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:45.136 "prchk_reftag": false,
00:17:45.136 "prchk_guard": false,
00:17:45.136 "hdgst": false,
00:17:45.136 "ddgst": false,
00:17:45.136 "dhchap_key": "key3",
00:17:45.136 "allow_unrecognized_csi": false,
00:17:45.136 "method": "bdev_nvme_attach_controller",
00:17:45.136 "req_id": 1
00:17:45.136 }
00:17:45.136 Got JSON-RPC error response
00:17:45.136 response:
00:17:45.136 {
00:17:45.136 "code": -5,
00:17:45.136 "message": "Input/output error"
00:17:45.136 }
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:45.136 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.394 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.652 request: 00:17:45.652 { 00:17:45.652 "name": "nvme0", 00:17:45.652 "trtype": "tcp", 00:17:45.652 "traddr": "10.0.0.2", 00:17:45.652 "adrfam": "ipv4", 00:17:45.652 "trsvcid": "4420", 00:17:45.652 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:45.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:45.652 "prchk_reftag": false, 00:17:45.652 "prchk_guard": false, 00:17:45.652 "hdgst": false, 00:17:45.652 "ddgst": false, 00:17:45.652 "dhchap_key": "key3", 00:17:45.652 "allow_unrecognized_csi": false, 00:17:45.652 "method": "bdev_nvme_attach_controller", 00:17:45.652 "req_id": 1 00:17:45.652 } 00:17:45.652 Got JSON-RPC error response 00:17:45.652 response: 00:17:45.652 { 00:17:45.652 "code": -5, 00:17:45.652 "message": "Input/output error" 00:17:45.652 } 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.652 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:45.910 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:46.476 request:
00:17:46.476 {
00:17:46.476 "name": "nvme0",
00:17:46.476 "trtype": "tcp",
00:17:46.476 "traddr": "10.0.0.2",
00:17:46.476 "adrfam": "ipv4",
00:17:46.476 "trsvcid": "4420",
00:17:46.476 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:46.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:46.476 "prchk_reftag": false,
00:17:46.476 "prchk_guard": false,
00:17:46.476 "hdgst": false,
00:17:46.476 "ddgst": false,
00:17:46.476 "dhchap_key": "key0",
00:17:46.476 "dhchap_ctrlr_key": "key1",
00:17:46.476 "allow_unrecognized_csi": false,
00:17:46.476 "method": "bdev_nvme_attach_controller",
00:17:46.476 "req_id": 1
00:17:46.476 }
00:17:46.476 Got JSON-RPC error response
00:17:46.476 response:
00:17:46.476 {
00:17:46.476 "code": -5,
00:17:46.476 "message": "Input/output error"
00:17:46.476 }
00:17:46.476 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:17:46.476 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:46.476 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:46.476 12:29:19
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.476 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:46.476 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:46.476 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:47.044 nvme0n1 00:17:47.044 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:47.044 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:47.044 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.302 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.302 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.302 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:47.560 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:48.933 nvme0n1 00:17:48.933 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:48.933 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:48.933 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:49.233 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:49.234 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.490 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.490 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:49.490 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: --dhchap-ctrl-secret DHHC-1:03:YWMzN2I2OTM1MjdkMjNlMzY2N2Q3MGQ4OTRmNjk3ZGRkZWJhMmRiNzNhMTU4ZDRiNDk3MWUzNDAxM2JmNmYzMN40fiQ=: 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.421 12:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:50.678 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:50.679 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:50.679 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:51.243 request:
00:17:51.243 {
00:17:51.243 "name": "nvme0",
00:17:51.243 "trtype": "tcp",
00:17:51.243 "traddr": "10.0.0.2",
00:17:51.243 "adrfam": "ipv4",
00:17:51.243 "trsvcid": "4420",
00:17:51.243 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:51.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:17:51.243 "prchk_reftag": false,
00:17:51.243 "prchk_guard": false,
00:17:51.243 "hdgst": false,
00:17:51.243 "ddgst": false,
00:17:51.243 "dhchap_key": "key1",
00:17:51.243 "allow_unrecognized_csi": false,
00:17:51.243 "method": "bdev_nvme_attach_controller",
00:17:51.243 "req_id": 1
00:17:51.243 }
00:17:51.243 Got JSON-RPC error response
00:17:51.243 response:
00:17:51.243 {
00:17:51.243 "code": -5,
00:17:51.243 "message": "Input/output error"
00:17:51.243 }
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:51.243 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.614 nvme0n1 00:17:52.614 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:52.614 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:52.614 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.871 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.871 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.871 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.129 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.386 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.386 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.386 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.386 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:53.386 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:53.386 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:53.644 nvme0n1 00:17:53.644 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:53.644 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:53.644 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.901 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.901 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.901 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: '' 2s 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: ]] 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjUzZjY0NDMzZTViMzJjMjUyYzYzY2JlZDUxZmE3ODQekaFB: 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:54.159 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:56.056 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:56.056 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:56.056 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:56.056 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: 2s 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: ]] 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWU5OGU0YTg4Nzc5YmM1OTk3MjVlN2E2YWI1OWUxNGNhZmY4OWM1MzFmNzNjYTc0qwJ/qQ==: 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:56.314 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.211 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:59.585 nvme0n1 00:17:59.585 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.585 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.585 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.585 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.585 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.585 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.517 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:00.517 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:00.517 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:00.775 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:01.033 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:01.033 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:01.033 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:01.291 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:02.225 request:
00:18:02.225 {
00:18:02.225 "name": "nvme0",
00:18:02.225 "dhchap_key": "key1",
00:18:02.225 "dhchap_ctrlr_key": "key3",
00:18:02.225 "method": "bdev_nvme_set_keys",
00:18:02.225 "req_id": 1
00:18:02.225 }
00:18:02.225 Got JSON-RPC error response
00:18:02.225 response:
00:18:02.225 {
00:18:02.225 "code": -13,
00:18:02.225 "message": "Permission denied"
00:18:02.225 }
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:02.225 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:02.482 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:02.482 12:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:03.417 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:03.417 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:03.417 12:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.676 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.049 nvme0n1 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
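This last phase exercises live re-keying. nvmf_subsystem_set_keys changes which keys the target accepts for an already-authorized host, and bdev_nvme_set_keys re-authenticates the existing host controller; a host-side rotation that does not match what the target was given is rejected with -13 (Permission denied), as in the request/response exchange above and the one that follows. The matching pair of calls, taken verbatim from this log:

  # rotate on the target first, then re-authenticate the live controller on the host
  rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # mismatched rotations (key1/key3 above, key2/key0 below) fail with -13; the jq length
  # loop then polls bdev_nvme_get_controllers until the controller, attached with
  # --ctrlr-loss-timeout-sec 1, drops off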
00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.049 12:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.984 request: 00:18:05.984 { 00:18:05.984 "name": "nvme0", 00:18:05.984 "dhchap_key": "key2", 00:18:05.984 "dhchap_ctrlr_key": "key0", 00:18:05.984 "method": "bdev_nvme_set_keys", 00:18:05.984 "req_id": 1 00:18:05.984 } 00:18:05.984 Got JSON-RPC error response 00:18:05.984 response: 00:18:05.984 { 00:18:05.984 "code": -13, 00:18:05.984 "message": "Permission denied" 00:18:05.984 } 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.984 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.242 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:06.242 12:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:07.174 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:07.174 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:07.174 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 598050 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 598050 ']' 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 598050 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:07.432 12:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 598050 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 598050' 00:18:07.432 killing process with pid 598050 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 598050 00:18:07.432 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 598050 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.998 rmmod nvme_tcp 00:18:07.998 rmmod nvme_fabrics 00:18:07.998 rmmod nvme_keyring 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 620680 ']' 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 620680 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 620680 ']' 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 620680 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 620680 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 620680' 00:18:07.998 killing process with pid 620680 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 620680 00:18:07.998 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@976 -- # wait 620680 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.256 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.mcl /tmp/spdk.key-sha256.xfO /tmp/spdk.key-sha384.5VG /tmp/spdk.key-sha512.Y5L /tmp/spdk.key-sha512.ZbK /tmp/spdk.key-sha384.Bwb /tmp/spdk.key-sha256.dRp '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:10.285 00:18:10.285 real 3m30.570s 00:18:10.285 user 8m13.555s 00:18:10.285 sys 0m28.305s 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.285 ************************************ 00:18:10.285 END TEST nvmf_auth_target 00:18:10.285 ************************************ 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.285 ************************************ 00:18:10.285 START TEST nvmf_bdevio_no_huge 00:18:10.285 ************************************ 00:18:10.285 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.285 * Looking for test storage... 
00:18:10.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.544 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:10.544 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:10.544 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.544 --rc genhtml_branch_coverage=1 00:18:10.544 --rc genhtml_function_coverage=1 00:18:10.544 --rc genhtml_legend=1 00:18:10.544 --rc geninfo_all_blocks=1 00:18:10.544 --rc geninfo_unexecuted_blocks=1 00:18:10.544 00:18:10.544 ' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.544 --rc genhtml_branch_coverage=1 00:18:10.544 --rc genhtml_function_coverage=1 00:18:10.544 --rc genhtml_legend=1 00:18:10.544 --rc geninfo_all_blocks=1 00:18:10.544 --rc geninfo_unexecuted_blocks=1 00:18:10.544 00:18:10.544 ' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.544 --rc genhtml_branch_coverage=1 00:18:10.544 --rc genhtml_function_coverage=1 00:18:10.544 --rc genhtml_legend=1 00:18:10.544 --rc geninfo_all_blocks=1 00:18:10.544 --rc geninfo_unexecuted_blocks=1 00:18:10.544 00:18:10.544 ' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.544 --rc genhtml_branch_coverage=1 00:18:10.544 --rc genhtml_function_coverage=1 00:18:10.544 --rc genhtml_legend=1 00:18:10.544 --rc geninfo_all_blocks=1 00:18:10.544 --rc geninfo_unexecuted_blocks=1 00:18:10.544 00:18:10.544 ' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.544 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:10.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.545 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:13.079 
12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:13.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:13.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.079 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:13.080 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:13.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:13.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:18:13.080 00:18:13.080 --- 10.0.0.2 ping statistics --- 00:18:13.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.080 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:18:13.080 00:18:13.080 --- 10.0.0.1 ping statistics --- 00:18:13.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.080 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=625938 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 625938 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 625938 ']' 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.080 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.080 [2024-10-30 12:29:45.373394] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:18:13.080 [2024-10-30 12:29:45.373486] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:13.080 [2024-10-30 12:29:45.454164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.080 [2024-10-30 12:29:45.513348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.080 [2024-10-30 12:29:45.513412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.080 [2024-10-30 12:29:45.513425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.080 [2024-10-30 12:29:45.513435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.080 [2024-10-30 12:29:45.513445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
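Stepping back from the trace for a moment: the launch above runs nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x78 (cores 3-6, matching the reactor notices that follow), and because this test passes --no-hugepages, DPDK comes up with --no-huge and a 1024 MiB anonymous pool (-s 1024). A simplified, hedged sketch of the start-and-wait pattern; SPDK_BIN and SPDK_RPC are placeholders for the workspace paths shown in the trace.

# Sketch only; binary and rpc.py paths are placeholders.
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# waitforlisten equivalent: poll the RPC socket until the app answers.
for _ in $(seq 1 100); do
    if "$SPDK_RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done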
00:18:13.080 [2024-10-30 12:29:45.514462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:13.080 [2024-10-30 12:29:45.514527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:13.081 [2024-10-30 12:29:45.514575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:13.081 [2024-10-30 12:29:45.514578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 [2024-10-30 12:29:45.671128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 Malloc0 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.081 [2024-10-30 12:29:45.709186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:13.081 { 00:18:13.081 "params": { 00:18:13.081 "name": "Nvme$subsystem", 00:18:13.081 "trtype": "$TEST_TRANSPORT", 00:18:13.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.081 "adrfam": "ipv4", 00:18:13.081 "trsvcid": "$NVMF_PORT", 00:18:13.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.081 "hdgst": ${hdgst:-false}, 00:18:13.081 "ddgst": ${ddgst:-false} 00:18:13.081 }, 00:18:13.081 "method": "bdev_nvme_attach_controller" 00:18:13.081 } 00:18:13.081 EOF 00:18:13.081 )") 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:13.081 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:13.081 "params": { 00:18:13.081 "name": "Nvme1", 00:18:13.081 "trtype": "tcp", 00:18:13.081 "traddr": "10.0.0.2", 00:18:13.081 "adrfam": "ipv4", 00:18:13.081 "trsvcid": "4420", 00:18:13.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.081 "hdgst": false, 00:18:13.081 "ddgst": false 00:18:13.081 }, 00:18:13.081 "method": "bdev_nvme_attach_controller" 00:18:13.081 }' 00:18:13.081 [2024-10-30 12:29:45.760980] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
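The heredoc expansion traced above is the test's gen_nvmf_target_json helper at work: for each subsystem it templates a bdev_nvme_attach_controller call, fills in $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, and hands the result to bdevio through a /dev/fd process substitution. A hedged reproduction with the values this run resolved to; note the outer "subsystems"/"config" wrapper is assumed from SPDK's JSON-config format rather than shown verbatim in the trace.

# Sketch only: fixed values replace the $NVMF_* variables; the wrapper
# object layout is an assumption, and BDEVIO is a placeholder for the
# bdevio binary path used in the trace.
gen_json() {
cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{
  "method":"bdev_nvme_attach_controller",
  "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2",
            "adrfam":"ipv4","trsvcid":"4420",
            "subnqn":"nqn.2016-06.io.spdk:cnode1",
            "hostnqn":"nqn.2016-06.io.spdk:host1",
            "hdgst":false,"ddgst":false}}]}]}
EOF
}
"$BDEVIO" --json <(gen_json) --no-huge -s 1024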
00:18:13.081 [2024-10-30 12:29:45.761063] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid625967 ] 00:18:13.339 [2024-10-30 12:29:45.834265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.339 [2024-10-30 12:29:45.899748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.339 [2024-10-30 12:29:45.899797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.340 [2024-10-30 12:29:45.899801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.597 I/O targets: 00:18:13.597 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:13.597 00:18:13.597 00:18:13.597 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.597 http://cunit.sourceforge.net/ 00:18:13.597 00:18:13.597 00:18:13.597 Suite: bdevio tests on: Nvme1n1 00:18:13.856 Test: blockdev write read block ...passed 00:18:13.856 Test: blockdev write zeroes read block ...passed 00:18:13.856 Test: blockdev write zeroes read no split ...passed 00:18:13.856 Test: blockdev write zeroes read split ...passed 00:18:13.856 Test: blockdev write zeroes read split partial ...passed 00:18:13.856 Test: blockdev reset ...[2024-10-30 12:29:46.374394] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:13.856 [2024-10-30 12:29:46.374513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc936e0 (9): Bad file descriptor 00:18:13.856 [2024-10-30 12:29:46.403801] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:13.856 passed 00:18:13.856 Test: blockdev write read 8 blocks ...passed 00:18:13.856 Test: blockdev write read size > 128k ...passed 00:18:13.856 Test: blockdev write read invalid size ...passed 00:18:13.856 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.856 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.856 Test: blockdev write read max offset ...passed 00:18:14.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.115 Test: blockdev writev readv 8 blocks ...passed 00:18:14.115 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.115 Test: blockdev writev readv block ...passed 00:18:14.115 Test: blockdev writev readv size > 128k ...passed 00:18:14.115 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.115 Test: blockdev comparev and writev ...[2024-10-30 12:29:46.614475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.614513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.614548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.614565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.614900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.614925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.614948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.614963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.615287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.615312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.615334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.615351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.615678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.615702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.615724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.115 [2024-10-30 12:29:46.615746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:14.115 passed 00:18:14.115 Test: blockdev nvme passthru rw ...passed 00:18:14.115 Test: blockdev nvme passthru vendor specific ...[2024-10-30 12:29:46.698511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.115 [2024-10-30 12:29:46.698548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.698696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.115 [2024-10-30 12:29:46.698718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.698848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.115 [2024-10-30 12:29:46.698870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:14.115 [2024-10-30 12:29:46.699006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.115 [2024-10-30 12:29:46.699030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:14.115 passed 00:18:14.115 Test: blockdev nvme admin passthru ...passed 00:18:14.115 Test: blockdev copy ...passed 00:18:14.115 00:18:14.115 Run Summary: Type Total Ran Passed Failed Inactive 00:18:14.115 suites 1 1 n/a 0 0 00:18:14.115 tests 23 23 23 0 0 00:18:14.116 asserts 152 152 152 0 n/a 00:18:14.116 00:18:14.116 Elapsed time = 1.007 seconds 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.681 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.681 rmmod nvme_tcp 00:18:14.681 rmmod nvme_fabrics 00:18:14.682 rmmod nvme_keyring 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 625938 ']' 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 625938 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 625938 ']' 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 625938 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 625938 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 625938' 00:18:14.682 killing process with pid 625938 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 625938 00:18:14.682 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 625938 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.941 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:17.480 00:18:17.480 real 0m6.725s 00:18:17.480 user 0m11.305s 00:18:17.480 sys 0m2.617s 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.480 ************************************ 00:18:17.480 END TEST nvmf_bdevio_no_huge 00:18:17.480 ************************************ 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.480 ************************************ 00:18:17.480 START TEST nvmf_tls 00:18:17.480 ************************************ 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:17.480 * Looking for test storage... 00:18:17.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.480 --rc genhtml_branch_coverage=1 00:18:17.480 --rc genhtml_function_coverage=1 00:18:17.480 --rc genhtml_legend=1 00:18:17.480 --rc geninfo_all_blocks=1 00:18:17.480 --rc geninfo_unexecuted_blocks=1 00:18:17.480 00:18:17.480 ' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.480 --rc genhtml_branch_coverage=1 00:18:17.480 --rc genhtml_function_coverage=1 00:18:17.480 --rc genhtml_legend=1 00:18:17.480 --rc geninfo_all_blocks=1 00:18:17.480 --rc geninfo_unexecuted_blocks=1 00:18:17.480 00:18:17.480 ' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.480 --rc genhtml_branch_coverage=1 00:18:17.480 --rc genhtml_function_coverage=1 00:18:17.480 --rc genhtml_legend=1 00:18:17.480 --rc geninfo_all_blocks=1 00:18:17.480 --rc geninfo_unexecuted_blocks=1 00:18:17.480 00:18:17.480 ' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:17.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.480 --rc genhtml_branch_coverage=1 00:18:17.480 --rc genhtml_function_coverage=1 00:18:17.480 --rc genhtml_legend=1 00:18:17.480 --rc geninfo_all_blocks=1 00:18:17.480 --rc geninfo_unexecuted_blocks=1 00:18:17.480 00:18:17.480 ' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
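[Editor's note] The xtrace above walks the lcov version gate in scripts/common.sh: "lt 1.15 2" expands to "cmp_versions 1.15 '<' 2", which splits both version strings on the IFS characters ".-:" and compares the numeric fields left to right, deciding on the first field that differs. A minimal Python sketch of that field-by-field comparison (the names here are illustrative, and the shell's decimal() handling of non-numeric fields is simplified to zero):

    import re

    def fields(version):
        # same split the shell performs with IFS=.-: ; non-numeric fields become 0
        return [int(f) if f.isdigit() else 0 for f in re.split(r"[.:-]", version)]

    def lt(v1, v2):
        a, b = fields(v1), fields(v2)
        for i in range(max(len(a), len(b))):
            x = a[i] if i < len(a) else 0
            y = b[i] if i < len(b) else 0
            if x != y:
                return x < y
        return False  # every field equal, so not strictly less-than

    print(lt("1.15", "2"))  # True: 1 < 2 decides on the first field

That first-field result is why the trace returns 0 immediately and the pre-2.0 lcov option set (--rc lcov_branch_coverage=1 ...) is exported just afterwards.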
00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.480 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:17.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:17.481 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.386 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:19.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:19.387 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:19.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:19.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.387 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:18:19.387 00:18:19.387 --- 10.0.0.2 ping statistics --- 00:18:19.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.387 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:18:19.387 00:18:19.387 --- 10.0.0.1 ping statistics --- 00:18:19.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.387 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.387 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=628166 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 628166 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 628166 ']' 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.646 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.646 [2024-10-30 12:29:52.148296] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:18:19.646 [2024-10-30 12:29:52.148393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.646 [2024-10-30 12:29:52.221987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.646 [2024-10-30 12:29:52.281012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.647 [2024-10-30 12:29:52.281074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.647 [2024-10-30 12:29:52.281087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.647 [2024-10-30 12:29:52.281097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.647 [2024-10-30 12:29:52.281106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.647 [2024-10-30 12:29:52.281754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:19.905 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:20.164 true 00:18:20.164 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.164 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:20.422 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:20.422 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:20.422 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:20.679 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.679 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:20.937 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:20.937 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:20.937 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:21.195 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.195 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:21.453 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:21.453 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:21.453 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.453 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:21.712 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:21.712 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:21.712 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:21.970 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.970 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:22.228 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:22.228 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:22.228 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:22.794 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:22.794 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZobvmFTjVc 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.2lE7fT9E4u 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZobvmFTjVc 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.2lE7fT9E4u 00:18:23.053 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:23.311 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:23.569 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZobvmFTjVc 00:18:23.569 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZobvmFTjVc 00:18:23.569 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.828 [2024-10-30 12:29:56.485130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.828 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:24.087 12:29:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:24.653 [2024-10-30 12:29:57.054651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.653 [2024-10-30 12:29:57.054902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.653 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:24.911 malloc0 00:18:24.911 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.169 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZobvmFTjVc 00:18:25.426 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.685 12:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZobvmFTjVc 00:18:35.674 Initializing NVMe Controllers 00:18:35.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:35.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:35.674 Initialization complete. Launching workers. 00:18:35.674 ======================================================== 00:18:35.675 Latency(us) 00:18:35.675 Device Information : IOPS MiB/s Average min max 00:18:35.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8624.90 33.69 7422.58 1092.99 8710.00 00:18:35.675 ======================================================== 00:18:35.675 Total : 8624.90 33.69 7422.58 1092.99 8710.00 00:18:35.675 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZobvmFTjVc 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZobvmFTjVc 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=630567 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 630567 /var/tmp/bdevperf.sock 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 630567 ']' 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:35.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:35.933 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.933 [2024-10-30 12:30:08.403665] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:18:35.933 [2024-10-30 12:30:08.403735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630567 ] 00:18:35.933 [2024-10-30 12:30:08.469187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.933 [2024-10-30 12:30:08.526478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.191 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.191 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:36.191 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZobvmFTjVc 00:18:36.450 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.707 [2024-10-30 12:30:09.216403] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.708 TLSTESTn1 00:18:36.708 12:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:36.965 Running I/O for 10 seconds... 
00:18:38.828 3453.00 IOPS, 13.49 MiB/s [2024-10-30T11:30:12.442Z] 3534.00 IOPS, 13.80 MiB/s [2024-10-30T11:30:13.814Z] 3522.33 IOPS, 13.76 MiB/s [2024-10-30T11:30:14.748Z] 3549.75 IOPS, 13.87 MiB/s [2024-10-30T11:30:15.677Z] 3550.20 IOPS, 13.87 MiB/s [2024-10-30T11:30:16.608Z] 3537.50 IOPS, 13.82 MiB/s [2024-10-30T11:30:17.539Z] 3533.71 IOPS, 13.80 MiB/s [2024-10-30T11:30:18.474Z] 3537.62 IOPS, 13.82 MiB/s [2024-10-30T11:30:19.854Z] 3541.33 IOPS, 13.83 MiB/s [2024-10-30T11:30:19.854Z] 3534.20 IOPS, 13.81 MiB/s 00:18:47.173 Latency(us) 00:18:47.173 [2024-10-30T11:30:19.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.173 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:47.173 Verification LBA range: start 0x0 length 0x2000 00:18:47.173 TLSTESTn1 : 10.03 3537.97 13.82 0.00 0.00 36110.75 7864.32 40583.77 00:18:47.173 [2024-10-30T11:30:19.854Z] =================================================================================================================== 00:18:47.173 [2024-10-30T11:30:19.854Z] Total : 3537.97 13.82 0.00 0.00 36110.75 7864.32 40583.77 00:18:47.173 { 00:18:47.173 "results": [ 00:18:47.173 { 00:18:47.173 "job": "TLSTESTn1", 00:18:47.173 "core_mask": "0x4", 00:18:47.173 "workload": "verify", 00:18:47.173 "status": "finished", 00:18:47.173 "verify_range": { 00:18:47.173 "start": 0, 00:18:47.173 "length": 8192 00:18:47.173 }, 00:18:47.173 "queue_depth": 128, 00:18:47.173 "io_size": 4096, 00:18:47.173 "runtime": 10.025228, 00:18:47.173 "iops": 3537.9743981882507, 00:18:47.173 "mibps": 13.820212492922854, 00:18:47.173 "io_failed": 0, 00:18:47.173 "io_timeout": 0, 00:18:47.173 "avg_latency_us": 36110.75186891422, 00:18:47.173 "min_latency_us": 7864.32, 00:18:47.173 "max_latency_us": 40583.77481481482 00:18:47.173 } 00:18:47.173 ], 00:18:47.173 "core_count": 1 00:18:47.173 } 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 630567 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 630567 ']' 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 630567 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 630567 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 630567' 00:18:47.173 killing process with pid 630567 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 630567 00:18:47.173 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.173 00:18:47.173 Latency(us) 00:18:47.173 [2024-10-30T11:30:19.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.173 [2024-10-30T11:30:19.854Z] 
=================================================================================================================== 00:18:47.173 [2024-10-30T11:30:19.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 630567 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2lE7fT9E4u 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2lE7fT9E4u 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2lE7fT9E4u 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2lE7fT9E4u 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632009 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632009 /var/tmp/bdevperf.sock 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 632009 ']' 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.173 12:30:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.173 [2024-10-30 12:30:19.788816] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:18:47.173 [2024-10-30 12:30:19.788913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632009 ] 00:18:47.173 [2024-10-30 12:30:19.855190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.430 [2024-10-30 12:30:19.910630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.430 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:47.430 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:47.430 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2lE7fT9E4u 00:18:47.688 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.947 [2024-10-30 12:30:20.562283] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.947 [2024-10-30 12:30:20.568029] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:47.947 [2024-10-30 12:30:20.568583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11332c0 (107): Transport endpoint is not connected 00:18:47.947 [2024-10-30 12:30:20.569574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11332c0 (9): Bad file descriptor 00:18:47.947 [2024-10-30 12:30:20.570573] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:47.947 [2024-10-30 12:30:20.570616] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:47.947 [2024-10-30 12:30:20.570631] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:47.947 [2024-10-30 12:30:20.570651] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
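[Editor's note] The attach failure above is the negative test target/tls.sh is running at this point: bdevperf loaded /tmp/tmp.2lE7fT9E4u (the ffeedd... key) as key0, while nqn.2016-06.io.spdk:host1 was registered on cnode1 earlier with the key from /tmp/tmp.ZobvmFTjVc, so the TLS handshake cannot complete and the controller lands in failed state; the JSON-RPC request and error response are dumped below. Both key files hold PSK interchange strings of the form NVMeTLSkey-1:01:<base64>:, produced earlier by format_interchange_psk via an inline python snippet. A standalone sketch of that transformation, assuming (as the helper's printed output above suggests) a little-endian CRC-32 appended to the configured PSK before base64 encoding:

    import base64
    import zlib

    def format_interchange_psk(psk: bytes, hash_id: int = 1) -> str:
        # base64 of (PSK || CRC-32 of PSK, little-endian), wrapped in the
        # NVMeTLSkey-1:<hh>:...: envelope; hh=01 matches the digest used above
        crc = zlib.crc32(psk).to_bytes(4, "little")
        b64 = base64.b64encode(psk + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

    # should reproduce the two keys printed in the trace earlier
    print(format_interchange_psk(b"00112233445566778899aabbccddeeff"))
    print(format_interchange_psk(b"ffeeddccbbaa99887766554433221100"))

Because the trailing checksum is derived from the key bytes, a single changed byte alters both the base64 body and the CRC, so a mismatched key is rejected during the handshake rather than surfacing later as corrupted I/O.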
00:18:47.947 request: 00:18:47.947 { 00:18:47.947 "name": "TLSTEST", 00:18:47.947 "trtype": "tcp", 00:18:47.947 "traddr": "10.0.0.2", 00:18:47.947 "adrfam": "ipv4", 00:18:47.947 "trsvcid": "4420", 00:18:47.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.947 "prchk_reftag": false, 00:18:47.947 "prchk_guard": false, 00:18:47.947 "hdgst": false, 00:18:47.947 "ddgst": false, 00:18:47.947 "psk": "key0", 00:18:47.947 "allow_unrecognized_csi": false, 00:18:47.947 "method": "bdev_nvme_attach_controller", 00:18:47.947 "req_id": 1 00:18:47.947 } 00:18:47.947 Got JSON-RPC error response 00:18:47.947 response: 00:18:47.947 { 00:18:47.947 "code": -5, 00:18:47.947 "message": "Input/output error" 00:18:47.947 } 00:18:47.947 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 632009 00:18:47.947 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 632009 ']' 00:18:47.947 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 632009 00:18:47.947 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:47.947 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.947 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 632009 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 632009' 00:18:48.206 killing process with pid 632009 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 632009 00:18:48.206 Received shutdown signal, test time was about 10.000000 seconds 00:18:48.206 00:18:48.206 Latency(us) 00:18:48.206 [2024-10-30T11:30:20.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.206 [2024-10-30T11:30:20.887Z] =================================================================================================================== 00:18:48.206 [2024-10-30T11:30:20.887Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 632009 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZobvmFTjVc 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.ZobvmFTjVc 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZobvmFTjVc 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZobvmFTjVc 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632164 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632164 /var/tmp/bdevperf.sock 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 632164 ']' 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.206 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.494 [2024-10-30 12:30:20.901732] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:18:48.494 [2024-10-30 12:30:20.901836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632164 ] 00:18:48.494 [2024-10-30 12:30:20.969251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.494 [2024-10-30 12:30:21.027274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.494 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.494 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:48.494 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZobvmFTjVc 00:18:48.779 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:49.043 [2024-10-30 12:30:21.676533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.043 [2024-10-30 12:30:21.682155] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:49.043 [2024-10-30 12:30:21.682187] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:49.043 [2024-10-30 12:30:21.682226] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:49.043 [2024-10-30 12:30:21.682762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ca2c0 (107): Transport endpoint is not connected 00:18:49.043 [2024-10-30 12:30:21.683751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ca2c0 (9): Bad file descriptor 00:18:49.043 [2024-10-30 12:30:21.684749] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:49.043 [2024-10-30 12:30:21.684770] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:49.043 [2024-10-30 12:30:21.684784] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:49.043 [2024-10-30 12:30:21.684802] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
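The target-side errors above make the failure mode at tls.sh@150 explicit: the PSK is resolved by the TLS identity string 'NVMe0R01 <hostnqn> <subnqn>', and nothing was registered for the host2/cnode1 pair. A registration along the following lines would be needed for this pairing to succeed (hypothetical here; the test deliberately omits it):

  # hypothetical host registration that would satisfy the identity lookup above
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0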
00:18:49.043 request: 00:18:49.043 { 00:18:49.043 "name": "TLSTEST", 00:18:49.043 "trtype": "tcp", 00:18:49.043 "traddr": "10.0.0.2", 00:18:49.043 "adrfam": "ipv4", 00:18:49.043 "trsvcid": "4420", 00:18:49.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.043 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:49.043 "prchk_reftag": false, 00:18:49.043 "prchk_guard": false, 00:18:49.043 "hdgst": false, 00:18:49.043 "ddgst": false, 00:18:49.043 "psk": "key0", 00:18:49.043 "allow_unrecognized_csi": false, 00:18:49.043 "method": "bdev_nvme_attach_controller", 00:18:49.043 "req_id": 1 00:18:49.043 } 00:18:49.043 Got JSON-RPC error response 00:18:49.043 response: 00:18:49.043 { 00:18:49.043 "code": -5, 00:18:49.043 "message": "Input/output error" 00:18:49.043 } 00:18:49.043 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 632164 00:18:49.043 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 632164 ']' 00:18:49.043 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 632164 00:18:49.043 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:49.043 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:49.043 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 632164 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 632164' 00:18:49.302 killing process with pid 632164 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 632164 00:18:49.302 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.302 00:18:49.302 Latency(us) 00:18:49.302 [2024-10-30T11:30:21.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.302 [2024-10-30T11:30:21.983Z] =================================================================================================================== 00:18:49.302 [2024-10-30T11:30:21.983Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 632164 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZobvmFTjVc 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.ZobvmFTjVc 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZobvmFTjVc 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZobvmFTjVc 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632306 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632306 /var/tmp/bdevperf.sock 00:18:49.302 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 632306 ']' 00:18:49.303 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.303 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:49.303 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.303 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:49.303 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 [2024-10-30 12:30:22.013886] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:18:49.562 [2024-10-30 12:30:22.013978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632306 ] 00:18:49.562 [2024-10-30 12:30:22.080610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.562 [2024-10-30 12:30:22.135421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.562 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.562 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:49.562 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZobvmFTjVc 00:18:50.127 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.127 [2024-10-30 12:30:22.775149] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.127 [2024-10-30 12:30:22.783056] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:50.127 [2024-10-30 12:30:22.783086] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:50.127 [2024-10-30 12:30:22.783140] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:50.127 [2024-10-30 12:30:22.783308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f52c0 (107): Transport endpoint is not connected 00:18:50.127 [2024-10-30 12:30:22.784312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f52c0 (9): Bad file descriptor 00:18:50.127 [2024-10-30 12:30:22.785298] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:50.127 [2024-10-30 12:30:22.785319] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:50.127 [2024-10-30 12:30:22.785333] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:50.127 [2024-10-30 12:30:22.785352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
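This run is the mirror image of the previous one: tls.sh@153 swaps the subsystem NQN instead of the host NQN, so the identity lookup 'NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2' again finds no key. Taken together, the three NOT-wrapped runs so far each probe one mismatch:

  # tls.sh@147  key the target does not accept (setup precedes this excerpt)  -> spdk_sock_recv errno 107
  # tls.sh@150  wrong hostnqn (host2)                                         -> PSK identity not found
  # tls.sh@153  wrong subnqn (cnode2)                                         -> PSK identity not found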
00:18:50.128 request: 00:18:50.128 { 00:18:50.128 "name": "TLSTEST", 00:18:50.128 "trtype": "tcp", 00:18:50.128 "traddr": "10.0.0.2", 00:18:50.128 "adrfam": "ipv4", 00:18:50.128 "trsvcid": "4420", 00:18:50.128 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:50.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.128 "prchk_reftag": false, 00:18:50.128 "prchk_guard": false, 00:18:50.128 "hdgst": false, 00:18:50.128 "ddgst": false, 00:18:50.128 "psk": "key0", 00:18:50.128 "allow_unrecognized_csi": false, 00:18:50.128 "method": "bdev_nvme_attach_controller", 00:18:50.128 "req_id": 1 00:18:50.128 } 00:18:50.128 Got JSON-RPC error response 00:18:50.128 response: 00:18:50.128 { 00:18:50.128 "code": -5, 00:18:50.128 "message": "Input/output error" 00:18:50.128 } 00:18:50.128 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 632306 00:18:50.128 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 632306 ']' 00:18:50.128 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 632306 00:18:50.128 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:50.128 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.386 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 632306 00:18:50.386 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:50.386 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:50.386 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 632306' 00:18:50.386 killing process with pid 632306 00:18:50.386 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 632306 00:18:50.386 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.386 00:18:50.386 Latency(us) 00:18:50.386 [2024-10-30T11:30:23.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.386 [2024-10-30T11:30:23.067Z] =================================================================================================================== 00:18:50.386 [2024-10-30T11:30:23.067Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.386 12:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 632306 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:50.386 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632449 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.386 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632449 /var/tmp/bdevperf.sock 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 632449 ']' 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:50.645 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.645 [2024-10-30 12:30:23.115792] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:18:50.645 [2024-10-30 12:30:23.115886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632449 ] 00:18:50.645 [2024-10-30 12:30:23.181948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.645 [2024-10-30 12:30:23.237050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.903 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:50.903 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:50.903 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:51.161 [2024-10-30 12:30:23.609826] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:51.161 [2024-10-30 12:30:23.609873] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:51.161 request: 00:18:51.161 { 00:18:51.161 "name": "key0", 00:18:51.161 "path": "", 00:18:51.161 "method": "keyring_file_add_key", 00:18:51.161 "req_id": 1 00:18:51.161 } 00:18:51.161 Got JSON-RPC error response 00:18:51.161 response: 00:18:51.161 { 00:18:51.161 "code": -1, 00:18:51.161 "message": "Operation not permitted" 00:18:51.161 } 00:18:51.161 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.419 [2024-10-30 12:30:23.878694] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.419 [2024-10-30 12:30:23.878754] bdev_nvme.c:6530:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:51.419 request: 00:18:51.419 { 00:18:51.419 "name": "TLSTEST", 00:18:51.419 "trtype": "tcp", 00:18:51.419 "traddr": "10.0.0.2", 00:18:51.419 "adrfam": "ipv4", 00:18:51.419 "trsvcid": "4420", 00:18:51.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.419 "prchk_reftag": false, 00:18:51.419 "prchk_guard": false, 00:18:51.419 "hdgst": false, 00:18:51.419 "ddgst": false, 00:18:51.419 "psk": "key0", 00:18:51.419 "allow_unrecognized_csi": false, 00:18:51.419 "method": "bdev_nvme_attach_controller", 00:18:51.419 "req_id": 1 00:18:51.419 } 00:18:51.419 Got JSON-RPC error response 00:18:51.419 response: 00:18:51.419 { 00:18:51.419 "code": -126, 00:18:51.419 "message": "Required key not available" 00:18:51.419 } 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 632449 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 632449 ']' 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 632449 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 632449 
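Note that tls.sh@156 fails at an earlier layer than the previous runs: with an empty key path, keyring_file_add_key is itself rejected ('Non-absolute paths are not allowed', code -1), so no TLS traffic is ever attempted and the attach fails with -126 'Required key not available', as seen above, rather than with a socket-level error:

  # the empty path is refused before any connection is made
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # -> -1 Operation not permitted
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0    # -> -126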
00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 632449' 00:18:51.419 killing process with pid 632449 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 632449 00:18:51.419 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.419 00:18:51.419 Latency(us) 00:18:51.419 [2024-10-30T11:30:24.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.419 [2024-10-30T11:30:24.100Z] =================================================================================================================== 00:18:51.419 [2024-10-30T11:30:24.100Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.419 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 632449 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 628166 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 628166 ']' 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 628166 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 628166 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 628166' 00:18:51.700 killing process with pid 628166 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 628166 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 628166 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:51.700 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.9YvOSgRkpn 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.9YvOSgRkpn 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=632601 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 632601 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 632601 ']' 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.958 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.958 [2024-10-30 12:30:24.466916] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:18:51.958 [2024-10-30 12:30:24.467000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.958 [2024-10-30 12:30:24.538006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.958 [2024-10-30 12:30:24.595698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.958 [2024-10-30 12:30:24.595760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
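The inline python at nvmf/common.sh@733 above produces the PSK interchange form of the configured key: the key bytes are suffixed with their CRC-32 and base64-encoded between an 'NVMeTLSkey-1:<hash>:' prefix (the digest argument 2 selects designator 02) and a closing ':'. A standalone sketch, assuming the CRC-32 is appended little-endian as the wWXNJw== tail above suggests:

  python3 - <<'EOF'
  import base64, zlib
  key = b"00112233445566778899aabbccddeeff0011223344556677"
  crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the key bytes; byte order assumed
  print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
  EOF

The resulting string is then written out with echo -n and locked down with chmod 0600 (tls.sh@162-163), which matters for the permission checks further below.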
00:18:51.958 [2024-10-30 12:30:24.595787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.958 [2024-10-30 12:30:24.595798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.958 [2024-10-30 12:30:24.595808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.958 [2024-10-30 12:30:24.596441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.9YvOSgRkpn 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9YvOSgRkpn 00:18:52.217 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:52.475 [2024-10-30 12:30:24.989993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.475 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:52.734 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:52.991 [2024-10-30 12:30:25.531477] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.991 [2024-10-30 12:30:25.531762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.991 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:53.249 malloc0 00:18:53.249 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:53.507 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:18:53.764 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9YvOSgRkpn 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9YvOSgRkpn 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632892 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632892 /var/tmp/bdevperf.sock 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 632892 ']' 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.022 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.279 [2024-10-30 12:30:26.739899] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:18:54.279 [2024-10-30 12:30:26.739971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632892 ] 00:18:54.279 [2024-10-30 12:30:26.804382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.279 [2024-10-30 12:30:26.861172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.537 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.537 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:54.537 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:18:54.794 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.051 [2024-10-30 12:30:27.536673] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.051 TLSTESTn1 00:18:55.051 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:55.051 Running I/O for 10 seconds... 00:18:57.358 3201.00 IOPS, 12.50 MiB/s [2024-10-30T11:30:30.973Z] 3248.00 IOPS, 12.69 MiB/s [2024-10-30T11:30:31.906Z] 3225.67 IOPS, 12.60 MiB/s [2024-10-30T11:30:32.841Z] 3242.75 IOPS, 12.67 MiB/s [2024-10-30T11:30:33.776Z] 3233.60 IOPS, 12.63 MiB/s [2024-10-30T11:30:35.153Z] 3239.00 IOPS, 12.65 MiB/s [2024-10-30T11:30:36.087Z] 3245.71 IOPS, 12.68 MiB/s [2024-10-30T11:30:37.023Z] 3248.88 IOPS, 12.69 MiB/s [2024-10-30T11:30:37.957Z] 3242.44 IOPS, 12.67 MiB/s [2024-10-30T11:30:37.957Z] 3243.30 IOPS, 12.67 MiB/s 00:19:05.276 Latency(us) 00:19:05.276 [2024-10-30T11:30:37.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.276 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.276 Verification LBA range: start 0x0 length 0x2000 00:19:05.276 TLSTESTn1 : 10.02 3249.82 12.69 0.00 0.00 39319.70 7573.05 39807.05 00:19:05.276 [2024-10-30T11:30:37.957Z] =================================================================================================================== 00:19:05.276 [2024-10-30T11:30:37.957Z] Total : 3249.82 12.69 0.00 0.00 39319.70 7573.05 39807.05 00:19:05.276 { 00:19:05.276 "results": [ 00:19:05.276 { 00:19:05.276 "job": "TLSTESTn1", 00:19:05.276 "core_mask": "0x4", 00:19:05.276 "workload": "verify", 00:19:05.276 "status": "finished", 00:19:05.276 "verify_range": { 00:19:05.276 "start": 0, 00:19:05.276 "length": 8192 00:19:05.276 }, 00:19:05.276 "queue_depth": 128, 00:19:05.276 "io_size": 4096, 00:19:05.276 "runtime": 10.01839, 00:19:05.276 "iops": 3249.823574446593, 00:19:05.276 "mibps": 12.694623337682003, 00:19:05.276 "io_failed": 0, 00:19:05.276 "io_timeout": 0, 00:19:05.276 "avg_latency_us": 39319.70116428118, 00:19:05.276 "min_latency_us": 7573.0488888888885, 00:19:05.276 "max_latency_us": 39807.05185185185 00:19:05.276 } 00:19:05.276 ], 00:19:05.276 
"core_count": 1 00:19:05.276 } 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 632892 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 632892 ']' 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 632892 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 632892 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 632892' 00:19:05.276 killing process with pid 632892 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 632892 00:19:05.276 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.276 00:19:05.276 Latency(us) 00:19:05.276 [2024-10-30T11:30:37.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.276 [2024-10-30T11:30:37.957Z] =================================================================================================================== 00:19:05.276 [2024-10-30T11:30:37.957Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.276 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 632892 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.9YvOSgRkpn 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9YvOSgRkpn 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9YvOSgRkpn 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9YvOSgRkpn 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.534 
12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9YvOSgRkpn 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=634216 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.534 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 634216 /var/tmp/bdevperf.sock 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 634216 ']' 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:05.535 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.535 [2024-10-30 12:30:38.109046] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:19:05.535 [2024-10-30 12:30:38.109139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634216 ] 00:19:05.535 [2024-10-30 12:30:38.175130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.792 [2024-10-30 12:30:38.236856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.792 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:05.792 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:05.792 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:06.049 [2024-10-30 12:30:38.602581] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9YvOSgRkpn': 0100666 00:19:06.049 [2024-10-30 12:30:38.602634] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:06.049 request: 00:19:06.049 { 00:19:06.049 "name": "key0", 00:19:06.049 "path": "/tmp/tmp.9YvOSgRkpn", 00:19:06.049 "method": "keyring_file_add_key", 00:19:06.049 "req_id": 1 00:19:06.049 } 00:19:06.049 Got JSON-RPC error response 00:19:06.049 response: 00:19:06.049 { 00:19:06.049 "code": -1, 00:19:06.049 "message": "Operation not permitted" 00:19:06.049 } 00:19:06.049 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.307 [2024-10-30 12:30:38.867383] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.307 [2024-10-30 12:30:38.867441] bdev_nvme.c:6530:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:06.307 request: 00:19:06.307 { 00:19:06.307 "name": "TLSTEST", 00:19:06.307 "trtype": "tcp", 00:19:06.307 "traddr": "10.0.0.2", 00:19:06.307 "adrfam": "ipv4", 00:19:06.307 "trsvcid": "4420", 00:19:06.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.307 "prchk_reftag": false, 00:19:06.307 "prchk_guard": false, 00:19:06.307 "hdgst": false, 00:19:06.307 "ddgst": false, 00:19:06.307 "psk": "key0", 00:19:06.307 "allow_unrecognized_csi": false, 00:19:06.307 "method": "bdev_nvme_attach_controller", 00:19:06.307 "req_id": 1 00:19:06.307 } 00:19:06.307 Got JSON-RPC error response 00:19:06.307 response: 00:19:06.307 { 00:19:06.307 "code": -126, 00:19:06.307 "message": "Required key not available" 00:19:06.307 } 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 634216 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 634216 ']' 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 634216 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 634216 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 634216' 00:19:06.307 killing process with pid 634216 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 634216 00:19:06.307 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.307 00:19:06.307 Latency(us) 00:19:06.307 [2024-10-30T11:30:38.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.307 [2024-10-30T11:30:38.988Z] =================================================================================================================== 00:19:06.307 [2024-10-30T11:30:38.988Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:06.307 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 634216 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 632601 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 632601 ']' 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 632601 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 632601 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 632601' 00:19:06.565 killing process with pid 632601 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 632601 00:19:06.565 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 632601 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=634446 
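The two permission states bracket what the file-based keyring accepts: the 0600 key file passed the successful TLSTESTn1 run earlier, while after chmod 0666 at tls.sh@171 the very same file is refused outright ('Invalid permissions for key file ... 0100666', code -1), and the attach again surfaces as -126. In short:

  chmod 0600 /tmp/tmp.9YvOSgRkpn    # accepted by keyring_file_add_key
  chmod 0666 /tmp/tmp.9YvOSgRkpn    # rejected: group/world access to a PSK file is not allowed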
00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 634446 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 634446 ']' 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.824 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.824 [2024-10-30 12:30:39.469518] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:06.824 [2024-10-30 12:30:39.469636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.083 [2024-10-30 12:30:39.540964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.083 [2024-10-30 12:30:39.592762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.083 [2024-10-30 12:30:39.592836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.083 [2024-10-30 12:30:39.592859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.083 [2024-10-30 12:30:39.592869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.083 [2024-10-30 12:30:39.592878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
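For reference, the target under test is launched inside the cvl_0_0_ns_spdk network namespace with the command captured in the trace above; condensed here, and assuming standard SPDK application option semantics (-i shared memory id, -e tracepoint group mask, -m reactor core mask, so 0x2 pins the single reactor to core 1, matching the reactor notice that follows):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2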
00:19:07.083 [2024-10-30 12:30:39.593435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.9YvOSgRkpn 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9YvOSgRkpn 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.9YvOSgRkpn 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9YvOSgRkpn 00:19:07.083 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:07.352 [2024-10-30 12:30:39.978953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.352 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:07.612 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:07.869 [2024-10-30 12:30:40.548497] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.869 [2024-10-30 12:30:40.548787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.127 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:08.385 malloc0 00:19:08.385 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:08.643 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:08.902 [2024-10-30 
12:30:41.365143] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9YvOSgRkpn': 0100666 00:19:08.902 [2024-10-30 12:30:41.365184] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:08.902 request: 00:19:08.902 { 00:19:08.902 "name": "key0", 00:19:08.902 "path": "/tmp/tmp.9YvOSgRkpn", 00:19:08.902 "method": "keyring_file_add_key", 00:19:08.902 "req_id": 1 00:19:08.902 } 00:19:08.902 Got JSON-RPC error response 00:19:08.902 response: 00:19:08.902 { 00:19:08.902 "code": -1, 00:19:08.902 "message": "Operation not permitted" 00:19:08.902 } 00:19:08.902 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.160 [2024-10-30 12:30:41.637929] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:09.160 [2024-10-30 12:30:41.637998] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:09.160 request: 00:19:09.160 { 00:19:09.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.160 "host": "nqn.2016-06.io.spdk:host1", 00:19:09.160 "psk": "key0", 00:19:09.160 "method": "nvmf_subsystem_add_host", 00:19:09.160 "req_id": 1 00:19:09.160 } 00:19:09.160 Got JSON-RPC error response 00:19:09.160 response: 00:19:09.160 { 00:19:09.160 "code": -32603, 00:19:09.160 "message": "Internal error" 00:19:09.160 } 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 634446 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 634446 ']' 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 634446 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 634446 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 634446' 00:19:09.160 killing process with pid 634446 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 634446 00:19:09.160 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 634446 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.9YvOSgRkpn 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=634776 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 634776 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 634776 ']' 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:09.418 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.418 [2024-10-30 12:30:41.990793] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:09.418 [2024-10-30 12:30:41.990878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.418 [2024-10-30 12:30:42.064254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.676 [2024-10-30 12:30:42.121131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.676 [2024-10-30 12:30:42.121196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.676 [2024-10-30 12:30:42.121211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.676 [2024-10-30 12:30:42.121223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.676 [2024-10-30 12:30:42.121232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
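Condensed for readability, the RPC sequence the trace below replays against the restarted target (pid 634776) is the following, with paths shortened to scripts/rpc.py and all arguments verbatim from target/tls.sh:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag marks the listener as TLS-enabled (hence the "TLS support is considered experimental" notices), and with the key file now at mode 0600 every step succeeds where the pid-634446 attempt above failed.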
00:19:09.676 [2024-10-30 12:30:42.121809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.9YvOSgRkpn 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9YvOSgRkpn 00:19:09.676 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.934 [2024-10-30 12:30:42.495965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.934 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:10.192 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:10.451 [2024-10-30 12:30:43.129688] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.451 [2024-10-30 12:30:43.129936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.709 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:10.967 malloc0 00:19:10.968 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:11.226 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:11.483 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=635064 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 635064 /var/tmp/bdevperf.sock 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 635064 ']' 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.743 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.743 [2024-10-30 12:30:44.272658] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:11.743 [2024-10-30 12:30:44.272749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635064 ] 00:19:11.743 [2024-10-30 12:30:44.340445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.743 [2024-10-30 12:30:44.401776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.002 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:12.002 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:12.002 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:12.260 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.519 [2024-10-30 12:30:45.065160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.519 TLSTESTn1 00:19:12.519 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:13.085 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:13.085 "subsystems": [ 00:19:13.085 { 00:19:13.085 "subsystem": "keyring", 00:19:13.085 "config": [ 00:19:13.085 { 00:19:13.085 "method": "keyring_file_add_key", 00:19:13.085 "params": { 00:19:13.085 "name": "key0", 00:19:13.085 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:13.085 } 00:19:13.085 } 00:19:13.085 ] 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "subsystem": "iobuf", 00:19:13.085 "config": [ 00:19:13.085 { 00:19:13.085 "method": "iobuf_set_options", 00:19:13.085 "params": { 00:19:13.085 "small_pool_count": 8192, 00:19:13.085 "large_pool_count": 1024, 00:19:13.085 "small_bufsize": 8192, 00:19:13.085 "large_bufsize": 135168, 00:19:13.085 "enable_numa": false 00:19:13.085 } 00:19:13.085 } 00:19:13.085 ] 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "subsystem": "sock", 00:19:13.085 "config": [ 00:19:13.085 { 00:19:13.085 "method": "sock_set_default_impl", 00:19:13.085 "params": { 00:19:13.085 "impl_name": "posix" 
00:19:13.085 } 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "method": "sock_impl_set_options", 00:19:13.085 "params": { 00:19:13.085 "impl_name": "ssl", 00:19:13.085 "recv_buf_size": 4096, 00:19:13.085 "send_buf_size": 4096, 00:19:13.085 "enable_recv_pipe": true, 00:19:13.085 "enable_quickack": false, 00:19:13.085 "enable_placement_id": 0, 00:19:13.085 "enable_zerocopy_send_server": true, 00:19:13.085 "enable_zerocopy_send_client": false, 00:19:13.085 "zerocopy_threshold": 0, 00:19:13.085 "tls_version": 0, 00:19:13.085 "enable_ktls": false 00:19:13.085 } 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "method": "sock_impl_set_options", 00:19:13.085 "params": { 00:19:13.085 "impl_name": "posix", 00:19:13.085 "recv_buf_size": 2097152, 00:19:13.085 "send_buf_size": 2097152, 00:19:13.085 "enable_recv_pipe": true, 00:19:13.085 "enable_quickack": false, 00:19:13.085 "enable_placement_id": 0, 00:19:13.085 "enable_zerocopy_send_server": true, 00:19:13.085 "enable_zerocopy_send_client": false, 00:19:13.085 "zerocopy_threshold": 0, 00:19:13.085 "tls_version": 0, 00:19:13.085 "enable_ktls": false 00:19:13.085 } 00:19:13.085 } 00:19:13.085 ] 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "subsystem": "vmd", 00:19:13.085 "config": [] 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "subsystem": "accel", 00:19:13.085 "config": [ 00:19:13.085 { 00:19:13.085 "method": "accel_set_options", 00:19:13.085 "params": { 00:19:13.085 "small_cache_size": 128, 00:19:13.085 "large_cache_size": 16, 00:19:13.085 "task_count": 2048, 00:19:13.085 "sequence_count": 2048, 00:19:13.085 "buf_count": 2048 00:19:13.085 } 00:19:13.085 } 00:19:13.085 ] 00:19:13.085 }, 00:19:13.085 { 00:19:13.085 "subsystem": "bdev", 00:19:13.086 "config": [ 00:19:13.086 { 00:19:13.086 "method": "bdev_set_options", 00:19:13.086 "params": { 00:19:13.086 "bdev_io_pool_size": 65535, 00:19:13.086 "bdev_io_cache_size": 256, 00:19:13.086 "bdev_auto_examine": true, 00:19:13.086 "iobuf_small_cache_size": 128, 00:19:13.086 "iobuf_large_cache_size": 16 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "bdev_raid_set_options", 00:19:13.086 "params": { 00:19:13.086 "process_window_size_kb": 1024, 00:19:13.086 "process_max_bandwidth_mb_sec": 0 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "bdev_iscsi_set_options", 00:19:13.086 "params": { 00:19:13.086 "timeout_sec": 30 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "bdev_nvme_set_options", 00:19:13.086 "params": { 00:19:13.086 "action_on_timeout": "none", 00:19:13.086 "timeout_us": 0, 00:19:13.086 "timeout_admin_us": 0, 00:19:13.086 "keep_alive_timeout_ms": 10000, 00:19:13.086 "arbitration_burst": 0, 00:19:13.086 "low_priority_weight": 0, 00:19:13.086 "medium_priority_weight": 0, 00:19:13.086 "high_priority_weight": 0, 00:19:13.086 "nvme_adminq_poll_period_us": 10000, 00:19:13.086 "nvme_ioq_poll_period_us": 0, 00:19:13.086 "io_queue_requests": 0, 00:19:13.086 "delay_cmd_submit": true, 00:19:13.086 "transport_retry_count": 4, 00:19:13.086 "bdev_retry_count": 3, 00:19:13.086 "transport_ack_timeout": 0, 00:19:13.086 "ctrlr_loss_timeout_sec": 0, 00:19:13.086 "reconnect_delay_sec": 0, 00:19:13.086 "fast_io_fail_timeout_sec": 0, 00:19:13.086 "disable_auto_failback": false, 00:19:13.086 "generate_uuids": false, 00:19:13.086 "transport_tos": 0, 00:19:13.086 "nvme_error_stat": false, 00:19:13.086 "rdma_srq_size": 0, 00:19:13.086 "io_path_stat": false, 00:19:13.086 "allow_accel_sequence": false, 00:19:13.086 "rdma_max_cq_size": 0, 00:19:13.086 
"rdma_cm_event_timeout_ms": 0, 00:19:13.086 "dhchap_digests": [ 00:19:13.086 "sha256", 00:19:13.086 "sha384", 00:19:13.086 "sha512" 00:19:13.086 ], 00:19:13.086 "dhchap_dhgroups": [ 00:19:13.086 "null", 00:19:13.086 "ffdhe2048", 00:19:13.086 "ffdhe3072", 00:19:13.086 "ffdhe4096", 00:19:13.086 "ffdhe6144", 00:19:13.086 "ffdhe8192" 00:19:13.086 ] 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "bdev_nvme_set_hotplug", 00:19:13.086 "params": { 00:19:13.086 "period_us": 100000, 00:19:13.086 "enable": false 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "bdev_malloc_create", 00:19:13.086 "params": { 00:19:13.086 "name": "malloc0", 00:19:13.086 "num_blocks": 8192, 00:19:13.086 "block_size": 4096, 00:19:13.086 "physical_block_size": 4096, 00:19:13.086 "uuid": "3ea85188-eb96-4605-b8f9-84bdbc22d8c6", 00:19:13.086 "optimal_io_boundary": 0, 00:19:13.086 "md_size": 0, 00:19:13.086 "dif_type": 0, 00:19:13.086 "dif_is_head_of_md": false, 00:19:13.086 "dif_pi_format": 0 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "bdev_wait_for_examine" 00:19:13.086 } 00:19:13.086 ] 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "subsystem": "nbd", 00:19:13.086 "config": [] 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "subsystem": "scheduler", 00:19:13.086 "config": [ 00:19:13.086 { 00:19:13.086 "method": "framework_set_scheduler", 00:19:13.086 "params": { 00:19:13.086 "name": "static" 00:19:13.086 } 00:19:13.086 } 00:19:13.086 ] 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "subsystem": "nvmf", 00:19:13.086 "config": [ 00:19:13.086 { 00:19:13.086 "method": "nvmf_set_config", 00:19:13.086 "params": { 00:19:13.086 "discovery_filter": "match_any", 00:19:13.086 "admin_cmd_passthru": { 00:19:13.086 "identify_ctrlr": false 00:19:13.086 }, 00:19:13.086 "dhchap_digests": [ 00:19:13.086 "sha256", 00:19:13.086 "sha384", 00:19:13.086 "sha512" 00:19:13.086 ], 00:19:13.086 "dhchap_dhgroups": [ 00:19:13.086 "null", 00:19:13.086 "ffdhe2048", 00:19:13.086 "ffdhe3072", 00:19:13.086 "ffdhe4096", 00:19:13.086 "ffdhe6144", 00:19:13.086 "ffdhe8192" 00:19:13.086 ] 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_set_max_subsystems", 00:19:13.086 "params": { 00:19:13.086 "max_subsystems": 1024 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_set_crdt", 00:19:13.086 "params": { 00:19:13.086 "crdt1": 0, 00:19:13.086 "crdt2": 0, 00:19:13.086 "crdt3": 0 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_create_transport", 00:19:13.086 "params": { 00:19:13.086 "trtype": "TCP", 00:19:13.086 "max_queue_depth": 128, 00:19:13.086 "max_io_qpairs_per_ctrlr": 127, 00:19:13.086 "in_capsule_data_size": 4096, 00:19:13.086 "max_io_size": 131072, 00:19:13.086 "io_unit_size": 131072, 00:19:13.086 "max_aq_depth": 128, 00:19:13.086 "num_shared_buffers": 511, 00:19:13.086 "buf_cache_size": 4294967295, 00:19:13.086 "dif_insert_or_strip": false, 00:19:13.086 "zcopy": false, 00:19:13.086 "c2h_success": false, 00:19:13.086 "sock_priority": 0, 00:19:13.086 "abort_timeout_sec": 1, 00:19:13.086 "ack_timeout": 0, 00:19:13.086 "data_wr_pool_size": 0 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_create_subsystem", 00:19:13.086 "params": { 00:19:13.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.086 "allow_any_host": false, 00:19:13.086 "serial_number": "SPDK00000000000001", 00:19:13.086 "model_number": "SPDK bdev Controller", 00:19:13.086 "max_namespaces": 10, 00:19:13.086 "min_cntlid": 1, 00:19:13.086 
"max_cntlid": 65519, 00:19:13.086 "ana_reporting": false 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_subsystem_add_host", 00:19:13.086 "params": { 00:19:13.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.086 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.086 "psk": "key0" 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_subsystem_add_ns", 00:19:13.086 "params": { 00:19:13.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.086 "namespace": { 00:19:13.086 "nsid": 1, 00:19:13.086 "bdev_name": "malloc0", 00:19:13.086 "nguid": "3EA85188EB964605B8F984BDBC22D8C6", 00:19:13.086 "uuid": "3ea85188-eb96-4605-b8f9-84bdbc22d8c6", 00:19:13.086 "no_auto_visible": false 00:19:13.086 } 00:19:13.086 } 00:19:13.086 }, 00:19:13.086 { 00:19:13.086 "method": "nvmf_subsystem_add_listener", 00:19:13.086 "params": { 00:19:13.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.086 "listen_address": { 00:19:13.086 "trtype": "TCP", 00:19:13.087 "adrfam": "IPv4", 00:19:13.087 "traddr": "10.0.0.2", 00:19:13.087 "trsvcid": "4420" 00:19:13.087 }, 00:19:13.087 "secure_channel": true 00:19:13.087 } 00:19:13.087 } 00:19:13.087 ] 00:19:13.087 } 00:19:13.087 ] 00:19:13.087 }' 00:19:13.087 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:13.345 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:13.345 "subsystems": [ 00:19:13.345 { 00:19:13.345 "subsystem": "keyring", 00:19:13.345 "config": [ 00:19:13.345 { 00:19:13.345 "method": "keyring_file_add_key", 00:19:13.345 "params": { 00:19:13.345 "name": "key0", 00:19:13.345 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:13.345 } 00:19:13.345 } 00:19:13.345 ] 00:19:13.345 }, 00:19:13.345 { 00:19:13.345 "subsystem": "iobuf", 00:19:13.345 "config": [ 00:19:13.345 { 00:19:13.345 "method": "iobuf_set_options", 00:19:13.345 "params": { 00:19:13.345 "small_pool_count": 8192, 00:19:13.345 "large_pool_count": 1024, 00:19:13.345 "small_bufsize": 8192, 00:19:13.345 "large_bufsize": 135168, 00:19:13.345 "enable_numa": false 00:19:13.345 } 00:19:13.345 } 00:19:13.345 ] 00:19:13.345 }, 00:19:13.345 { 00:19:13.345 "subsystem": "sock", 00:19:13.345 "config": [ 00:19:13.345 { 00:19:13.345 "method": "sock_set_default_impl", 00:19:13.345 "params": { 00:19:13.345 "impl_name": "posix" 00:19:13.345 } 00:19:13.345 }, 00:19:13.345 { 00:19:13.345 "method": "sock_impl_set_options", 00:19:13.345 "params": { 00:19:13.345 "impl_name": "ssl", 00:19:13.345 "recv_buf_size": 4096, 00:19:13.345 "send_buf_size": 4096, 00:19:13.345 "enable_recv_pipe": true, 00:19:13.345 "enable_quickack": false, 00:19:13.345 "enable_placement_id": 0, 00:19:13.345 "enable_zerocopy_send_server": true, 00:19:13.345 "enable_zerocopy_send_client": false, 00:19:13.345 "zerocopy_threshold": 0, 00:19:13.345 "tls_version": 0, 00:19:13.345 "enable_ktls": false 00:19:13.345 } 00:19:13.345 }, 00:19:13.345 { 00:19:13.346 "method": "sock_impl_set_options", 00:19:13.346 "params": { 00:19:13.346 "impl_name": "posix", 00:19:13.346 "recv_buf_size": 2097152, 00:19:13.346 "send_buf_size": 2097152, 00:19:13.346 "enable_recv_pipe": true, 00:19:13.346 "enable_quickack": false, 00:19:13.346 "enable_placement_id": 0, 00:19:13.346 "enable_zerocopy_send_server": true, 00:19:13.346 "enable_zerocopy_send_client": false, 00:19:13.346 "zerocopy_threshold": 0, 00:19:13.346 "tls_version": 0, 00:19:13.346 "enable_ktls": false 00:19:13.346 } 00:19:13.346 
} 00:19:13.346 ] 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "subsystem": "vmd", 00:19:13.346 "config": [] 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "subsystem": "accel", 00:19:13.346 "config": [ 00:19:13.346 { 00:19:13.346 "method": "accel_set_options", 00:19:13.346 "params": { 00:19:13.346 "small_cache_size": 128, 00:19:13.346 "large_cache_size": 16, 00:19:13.346 "task_count": 2048, 00:19:13.346 "sequence_count": 2048, 00:19:13.346 "buf_count": 2048 00:19:13.346 } 00:19:13.346 } 00:19:13.346 ] 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "subsystem": "bdev", 00:19:13.346 "config": [ 00:19:13.346 { 00:19:13.346 "method": "bdev_set_options", 00:19:13.346 "params": { 00:19:13.346 "bdev_io_pool_size": 65535, 00:19:13.346 "bdev_io_cache_size": 256, 00:19:13.346 "bdev_auto_examine": true, 00:19:13.346 "iobuf_small_cache_size": 128, 00:19:13.346 "iobuf_large_cache_size": 16 00:19:13.346 } 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "method": "bdev_raid_set_options", 00:19:13.346 "params": { 00:19:13.346 "process_window_size_kb": 1024, 00:19:13.346 "process_max_bandwidth_mb_sec": 0 00:19:13.346 } 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "method": "bdev_iscsi_set_options", 00:19:13.346 "params": { 00:19:13.346 "timeout_sec": 30 00:19:13.346 } 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "method": "bdev_nvme_set_options", 00:19:13.346 "params": { 00:19:13.346 "action_on_timeout": "none", 00:19:13.346 "timeout_us": 0, 00:19:13.346 "timeout_admin_us": 0, 00:19:13.346 "keep_alive_timeout_ms": 10000, 00:19:13.346 "arbitration_burst": 0, 00:19:13.346 "low_priority_weight": 0, 00:19:13.346 "medium_priority_weight": 0, 00:19:13.346 "high_priority_weight": 0, 00:19:13.346 "nvme_adminq_poll_period_us": 10000, 00:19:13.346 "nvme_ioq_poll_period_us": 0, 00:19:13.346 "io_queue_requests": 512, 00:19:13.346 "delay_cmd_submit": true, 00:19:13.346 "transport_retry_count": 4, 00:19:13.346 "bdev_retry_count": 3, 00:19:13.346 "transport_ack_timeout": 0, 00:19:13.346 "ctrlr_loss_timeout_sec": 0, 00:19:13.346 "reconnect_delay_sec": 0, 00:19:13.346 "fast_io_fail_timeout_sec": 0, 00:19:13.346 "disable_auto_failback": false, 00:19:13.346 "generate_uuids": false, 00:19:13.346 "transport_tos": 0, 00:19:13.346 "nvme_error_stat": false, 00:19:13.346 "rdma_srq_size": 0, 00:19:13.346 "io_path_stat": false, 00:19:13.346 "allow_accel_sequence": false, 00:19:13.346 "rdma_max_cq_size": 0, 00:19:13.346 "rdma_cm_event_timeout_ms": 0, 00:19:13.346 "dhchap_digests": [ 00:19:13.346 "sha256", 00:19:13.346 "sha384", 00:19:13.346 "sha512" 00:19:13.346 ], 00:19:13.346 "dhchap_dhgroups": [ 00:19:13.346 "null", 00:19:13.346 "ffdhe2048", 00:19:13.346 "ffdhe3072", 00:19:13.346 "ffdhe4096", 00:19:13.346 "ffdhe6144", 00:19:13.346 "ffdhe8192" 00:19:13.346 ] 00:19:13.346 } 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "method": "bdev_nvme_attach_controller", 00:19:13.346 "params": { 00:19:13.346 "name": "TLSTEST", 00:19:13.346 "trtype": "TCP", 00:19:13.346 "adrfam": "IPv4", 00:19:13.346 "traddr": "10.0.0.2", 00:19:13.346 "trsvcid": "4420", 00:19:13.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.346 "prchk_reftag": false, 00:19:13.346 "prchk_guard": false, 00:19:13.346 "ctrlr_loss_timeout_sec": 0, 00:19:13.346 "reconnect_delay_sec": 0, 00:19:13.346 "fast_io_fail_timeout_sec": 0, 00:19:13.346 "psk": "key0", 00:19:13.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.346 "hdgst": false, 00:19:13.346 "ddgst": false, 00:19:13.346 "multipath": "multipath" 00:19:13.346 } 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "method": 
"bdev_nvme_set_hotplug", 00:19:13.346 "params": { 00:19:13.346 "period_us": 100000, 00:19:13.346 "enable": false 00:19:13.346 } 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "method": "bdev_wait_for_examine" 00:19:13.346 } 00:19:13.346 ] 00:19:13.346 }, 00:19:13.346 { 00:19:13.346 "subsystem": "nbd", 00:19:13.346 "config": [] 00:19:13.346 } 00:19:13.346 ] 00:19:13.346 }' 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 635064 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 635064 ']' 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 635064 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 635064 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 635064' 00:19:13.346 killing process with pid 635064 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 635064 00:19:13.346 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.346 00:19:13.346 Latency(us) 00:19:13.346 [2024-10-30T11:30:46.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.346 [2024-10-30T11:30:46.027Z] =================================================================================================================== 00:19:13.346 [2024-10-30T11:30:46.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.346 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 635064 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 634776 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 634776 ']' 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 634776 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 634776 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 634776' 00:19:13.604 killing process with pid 634776 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 634776 00:19:13.604 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 634776 00:19:13.862 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:13.862 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.862 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:13.862 "subsystems": [ 00:19:13.862 { 00:19:13.862 "subsystem": "keyring", 00:19:13.862 "config": [ 00:19:13.862 { 00:19:13.862 "method": "keyring_file_add_key", 00:19:13.862 "params": { 00:19:13.862 "name": "key0", 00:19:13.862 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:13.862 } 00:19:13.862 } 00:19:13.862 ] 00:19:13.862 }, 00:19:13.862 { 00:19:13.862 "subsystem": "iobuf", 00:19:13.863 "config": [ 00:19:13.863 { 00:19:13.863 "method": "iobuf_set_options", 00:19:13.863 "params": { 00:19:13.863 "small_pool_count": 8192, 00:19:13.863 "large_pool_count": 1024, 00:19:13.863 "small_bufsize": 8192, 00:19:13.863 "large_bufsize": 135168, 00:19:13.863 "enable_numa": false 00:19:13.863 } 00:19:13.863 } 00:19:13.863 ] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "sock", 00:19:13.863 "config": [ 00:19:13.863 { 00:19:13.863 "method": "sock_set_default_impl", 00:19:13.863 "params": { 00:19:13.863 "impl_name": "posix" 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "sock_impl_set_options", 00:19:13.863 "params": { 00:19:13.863 "impl_name": "ssl", 00:19:13.863 "recv_buf_size": 4096, 00:19:13.863 "send_buf_size": 4096, 00:19:13.863 "enable_recv_pipe": true, 00:19:13.863 "enable_quickack": false, 00:19:13.863 "enable_placement_id": 0, 00:19:13.863 "enable_zerocopy_send_server": true, 00:19:13.863 "enable_zerocopy_send_client": false, 00:19:13.863 "zerocopy_threshold": 0, 00:19:13.863 "tls_version": 0, 00:19:13.863 "enable_ktls": false 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "sock_impl_set_options", 00:19:13.863 "params": { 00:19:13.863 "impl_name": "posix", 00:19:13.863 "recv_buf_size": 2097152, 00:19:13.863 "send_buf_size": 2097152, 00:19:13.863 "enable_recv_pipe": true, 00:19:13.863 "enable_quickack": false, 00:19:13.863 "enable_placement_id": 0, 00:19:13.863 "enable_zerocopy_send_server": true, 00:19:13.863 "enable_zerocopy_send_client": false, 00:19:13.863 "zerocopy_threshold": 0, 00:19:13.863 "tls_version": 0, 00:19:13.863 "enable_ktls": false 00:19:13.863 } 00:19:13.863 } 00:19:13.863 ] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "vmd", 00:19:13.863 "config": [] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "accel", 00:19:13.863 "config": [ 00:19:13.863 { 00:19:13.863 "method": "accel_set_options", 00:19:13.863 "params": { 00:19:13.863 "small_cache_size": 128, 00:19:13.863 "large_cache_size": 16, 00:19:13.863 "task_count": 2048, 00:19:13.863 "sequence_count": 2048, 00:19:13.863 "buf_count": 2048 00:19:13.863 } 00:19:13.863 } 00:19:13.863 ] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "bdev", 00:19:13.863 "config": [ 00:19:13.863 { 00:19:13.863 "method": "bdev_set_options", 00:19:13.863 "params": { 00:19:13.863 "bdev_io_pool_size": 65535, 00:19:13.863 "bdev_io_cache_size": 256, 00:19:13.863 "bdev_auto_examine": true, 00:19:13.863 "iobuf_small_cache_size": 128, 00:19:13.863 "iobuf_large_cache_size": 16 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "bdev_raid_set_options", 00:19:13.863 "params": { 00:19:13.863 "process_window_size_kb": 1024, 00:19:13.863 "process_max_bandwidth_mb_sec": 0 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "bdev_iscsi_set_options", 00:19:13.863 "params": { 00:19:13.863 
"timeout_sec": 30 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "bdev_nvme_set_options", 00:19:13.863 "params": { 00:19:13.863 "action_on_timeout": "none", 00:19:13.863 "timeout_us": 0, 00:19:13.863 "timeout_admin_us": 0, 00:19:13.863 "keep_alive_timeout_ms": 10000, 00:19:13.863 "arbitration_burst": 0, 00:19:13.863 "low_priority_weight": 0, 00:19:13.863 "medium_priority_weight": 0, 00:19:13.863 "high_priority_weight": 0, 00:19:13.863 "nvme_adminq_poll_period_us": 10000, 00:19:13.863 "nvme_ioq_poll_period_us": 0, 00:19:13.863 "io_queue_requests": 0, 00:19:13.863 "delay_cmd_submit": true, 00:19:13.863 "transport_retry_count": 4, 00:19:13.863 "bdev_retry_count": 3, 00:19:13.863 "transport_ack_timeout": 0, 00:19:13.863 "ctrlr_loss_timeout_sec": 0, 00:19:13.863 "reconnect_delay_sec": 0, 00:19:13.863 "fast_io_fail_timeout_sec": 0, 00:19:13.863 "disable_auto_failback": false, 00:19:13.863 "generate_uuids": false, 00:19:13.863 "transport_tos": 0, 00:19:13.863 "nvme_error_stat": false, 00:19:13.863 "rdma_srq_size": 0, 00:19:13.863 "io_path_stat": false, 00:19:13.863 "allow_accel_sequence": false, 00:19:13.863 "rdma_max_cq_size": 0, 00:19:13.863 "rdma_cm_event_timeout_ms": 0, 00:19:13.863 "dhchap_digests": [ 00:19:13.863 "sha256", 00:19:13.863 "sha384", 00:19:13.863 "sha512" 00:19:13.863 ], 00:19:13.863 "dhchap_dhgroups": [ 00:19:13.863 "null", 00:19:13.863 "ffdhe2048", 00:19:13.863 "ffdhe3072", 00:19:13.863 "ffdhe4096", 00:19:13.863 "ffdhe6144", 00:19:13.863 "ffdhe8192" 00:19:13.863 ] 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "bdev_nvme_set_hotplug", 00:19:13.863 "params": { 00:19:13.863 "period_us": 100000, 00:19:13.863 "enable": false 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "bdev_malloc_create", 00:19:13.863 "params": { 00:19:13.863 "name": "malloc0", 00:19:13.863 "num_blocks": 8192, 00:19:13.863 "block_size": 4096, 00:19:13.863 "physical_block_size": 4096, 00:19:13.863 "uuid": "3ea85188-eb96-4605-b8f9-84bdbc22d8c6", 00:19:13.863 "optimal_io_boundary": 0, 00:19:13.863 "md_size": 0, 00:19:13.863 "dif_type": 0, 00:19:13.863 "dif_is_head_of_md": false, 00:19:13.863 "dif_pi_format": 0 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "bdev_wait_for_examine" 00:19:13.863 } 00:19:13.863 ] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "nbd", 00:19:13.863 "config": [] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "scheduler", 00:19:13.863 "config": [ 00:19:13.863 { 00:19:13.863 "method": "framework_set_scheduler", 00:19:13.863 "params": { 00:19:13.863 "name": "static" 00:19:13.863 } 00:19:13.863 } 00:19:13.863 ] 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "subsystem": "nvmf", 00:19:13.863 "config": [ 00:19:13.863 { 00:19:13.863 "method": "nvmf_set_config", 00:19:13.863 "params": { 00:19:13.863 "discovery_filter": "match_any", 00:19:13.863 "admin_cmd_passthru": { 00:19:13.863 "identify_ctrlr": false 00:19:13.863 }, 00:19:13.863 "dhchap_digests": [ 00:19:13.863 "sha256", 00:19:13.863 "sha384", 00:19:13.863 "sha512" 00:19:13.863 ], 00:19:13.863 "dhchap_dhgroups": [ 00:19:13.863 "null", 00:19:13.863 "ffdhe2048", 00:19:13.863 "ffdhe3072", 00:19:13.863 "ffdhe4096", 00:19:13.863 "ffdhe6144", 00:19:13.863 "ffdhe8192" 00:19:13.863 ] 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "nvmf_set_max_subsystems", 00:19:13.863 "params": { 00:19:13.863 "max_subsystems": 1024 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "nvmf_set_crdt", 00:19:13.863 "params": { 
00:19:13.863 "crdt1": 0, 00:19:13.863 "crdt2": 0, 00:19:13.863 "crdt3": 0 00:19:13.863 } 00:19:13.863 }, 00:19:13.863 { 00:19:13.863 "method": "nvmf_create_transport", 00:19:13.863 "params": { 00:19:13.863 "trtype": "TCP", 00:19:13.863 "max_queue_depth": 128, 00:19:13.863 "max_io_qpairs_per_ctrlr": 127, 00:19:13.863 "in_capsule_data_size": 4096, 00:19:13.863 "max_io_size": 131072, 00:19:13.863 "io_unit_size": 131072, 00:19:13.863 "max_aq_depth": 128, 00:19:13.863 "num_shared_buffers": 511, 00:19:13.863 "buf_cache_size": 4294967295, 00:19:13.863 "dif_insert_or_strip": false, 00:19:13.863 "zcopy": false, 00:19:13.863 "c2h_success": false, 00:19:13.863 "sock_priority": 0, 00:19:13.863 "abort_timeout_sec": 1, 00:19:13.863 "ack_timeout": 0, 00:19:13.863 "data_wr_pool_size": 0 00:19:13.864 } 00:19:13.864 }, 00:19:13.864 { 00:19:13.864 "method": "nvmf_create_subsystem", 00:19:13.864 "params": { 00:19:13.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.864 "allow_any_host": false, 00:19:13.864 "serial_number": "SPDK00000000000001", 00:19:13.864 "model_number": "SPDK bdev Controller", 00:19:13.864 "max_namespaces": 10, 00:19:13.864 "min_cntlid": 1, 00:19:13.864 "max_cntlid": 65519, 00:19:13.864 "ana_reporting": false 00:19:13.864 } 00:19:13.864 }, 00:19:13.864 { 00:19:13.864 "method": "nvmf_subsystem_add_host", 00:19:13.864 "params": { 00:19:13.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.864 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.864 "psk": "key0" 00:19:13.864 } 00:19:13.864 }, 00:19:13.864 { 00:19:13.864 "method": "nvmf_subsystem_add_ns", 00:19:13.864 "params": { 00:19:13.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.864 "namespace": { 00:19:13.864 "nsid": 1, 00:19:13.864 "bdev_name": "malloc0", 00:19:13.864 "nguid": "3EA85188EB964605B8F984BDBC22D8C6", 00:19:13.864 "uuid": "3ea85188-eb96-4605-b8f9-84bdbc22d8c6", 00:19:13.864 "no_auto_visible": false 00:19:13.864 } 00:19:13.864 } 00:19:13.864 }, 00:19:13.864 { 00:19:13.864 "method": "nvmf_subsystem_add_listener", 00:19:13.864 "params": { 00:19:13.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.864 "listen_address": { 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.864 "trtype": "TCP", 00:19:13.864 "adrfam": "IPv4", 00:19:13.864 "traddr": "10.0.0.2", 00:19:13.864 "trsvcid": "4420" 00:19:13.864 }, 00:19:13.864 "secure_channel": true 00:19:13.864 } 00:19:13.864 } 00:19:13.864 ] 00:19:13.864 } 00:19:13.864 ] 00:19:13.864 }' 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=635345 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 635345 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 635345 ']' 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:13.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.864 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.864 [2024-10-30 12:30:46.407492] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:13.864 [2024-10-30 12:30:46.407588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.864 [2024-10-30 12:30:46.477533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.864 [2024-10-30 12:30:46.533965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.864 [2024-10-30 12:30:46.534024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.864 [2024-10-30 12:30:46.534043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.864 [2024-10-30 12:30:46.534055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.864 [2024-10-30 12:30:46.534065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.864 [2024-10-30 12:30:46.534787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.123 [2024-10-30 12:30:46.781761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.380 [2024-10-30 12:30:46.813796] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.380 [2024-10-30 12:30:46.814057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=635493 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 635493 /var/tmp/bdevperf.sock 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 635493 ']' 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.946 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.946 12:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:14.946 "subsystems": [ 00:19:14.946 { 00:19:14.946 "subsystem": "keyring", 00:19:14.946 "config": [ 00:19:14.946 { 00:19:14.946 "method": "keyring_file_add_key", 00:19:14.946 "params": { 00:19:14.946 "name": "key0", 00:19:14.946 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:14.946 } 00:19:14.946 } 00:19:14.946 ] 00:19:14.946 }, 00:19:14.946 { 00:19:14.946 "subsystem": "iobuf", 00:19:14.946 "config": [ 00:19:14.946 { 00:19:14.946 "method": "iobuf_set_options", 00:19:14.946 "params": { 00:19:14.946 "small_pool_count": 8192, 00:19:14.946 "large_pool_count": 1024, 00:19:14.946 "small_bufsize": 8192, 00:19:14.946 "large_bufsize": 135168, 00:19:14.946 "enable_numa": false 00:19:14.946 } 00:19:14.946 } 00:19:14.946 ] 00:19:14.946 }, 00:19:14.946 { 00:19:14.946 "subsystem": "sock", 00:19:14.946 "config": [ 00:19:14.946 { 00:19:14.946 "method": "sock_set_default_impl", 00:19:14.946 "params": { 00:19:14.946 "impl_name": "posix" 00:19:14.946 } 00:19:14.946 }, 00:19:14.946 { 00:19:14.946 "method": "sock_impl_set_options", 00:19:14.946 "params": { 00:19:14.946 "impl_name": "ssl", 00:19:14.946 "recv_buf_size": 4096, 00:19:14.946 "send_buf_size": 4096, 00:19:14.947 "enable_recv_pipe": true, 00:19:14.947 "enable_quickack": false, 00:19:14.947 "enable_placement_id": 0, 00:19:14.947 "enable_zerocopy_send_server": true, 00:19:14.947 "enable_zerocopy_send_client": false, 00:19:14.947 "zerocopy_threshold": 0, 00:19:14.947 "tls_version": 0, 00:19:14.947 "enable_ktls": false 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "sock_impl_set_options", 00:19:14.947 "params": { 00:19:14.947 "impl_name": "posix", 00:19:14.947 "recv_buf_size": 2097152, 00:19:14.947 "send_buf_size": 2097152, 00:19:14.947 "enable_recv_pipe": true, 00:19:14.947 "enable_quickack": false, 00:19:14.947 "enable_placement_id": 0, 00:19:14.947 "enable_zerocopy_send_server": true, 00:19:14.947 "enable_zerocopy_send_client": false, 00:19:14.947 "zerocopy_threshold": 0, 00:19:14.947 "tls_version": 0, 00:19:14.947 "enable_ktls": false 00:19:14.947 } 00:19:14.947 } 00:19:14.947 ] 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "subsystem": "vmd", 00:19:14.947 "config": [] 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "subsystem": "accel", 00:19:14.947 "config": [ 00:19:14.947 { 00:19:14.947 "method": "accel_set_options", 00:19:14.947 "params": { 00:19:14.947 "small_cache_size": 128, 00:19:14.947 "large_cache_size": 16, 00:19:14.947 "task_count": 2048, 00:19:14.947 "sequence_count": 2048, 00:19:14.947 "buf_count": 2048 00:19:14.947 } 00:19:14.947 } 00:19:14.947 ] 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "subsystem": "bdev", 00:19:14.947 "config": [ 00:19:14.947 { 00:19:14.947 "method": "bdev_set_options", 00:19:14.947 "params": { 00:19:14.947 "bdev_io_pool_size": 65535, 00:19:14.947 "bdev_io_cache_size": 256, 00:19:14.947 "bdev_auto_examine": true, 00:19:14.947 "iobuf_small_cache_size": 128, 00:19:14.947 "iobuf_large_cache_size": 16 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "bdev_raid_set_options", 00:19:14.947 "params": { 00:19:14.947 "process_window_size_kb": 1024, 00:19:14.947 "process_max_bandwidth_mb_sec": 0 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "bdev_iscsi_set_options", 00:19:14.947 "params": { 00:19:14.947 "timeout_sec": 30 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "bdev_nvme_set_options", 00:19:14.947 "params": { 00:19:14.947 "action_on_timeout": "none", 00:19:14.947 
"timeout_us": 0, 00:19:14.947 "timeout_admin_us": 0, 00:19:14.947 "keep_alive_timeout_ms": 10000, 00:19:14.947 "arbitration_burst": 0, 00:19:14.947 "low_priority_weight": 0, 00:19:14.947 "medium_priority_weight": 0, 00:19:14.947 "high_priority_weight": 0, 00:19:14.947 "nvme_adminq_poll_period_us": 10000, 00:19:14.947 "nvme_ioq_poll_period_us": 0, 00:19:14.947 "io_queue_requests": 512, 00:19:14.947 "delay_cmd_submit": true, 00:19:14.947 "transport_retry_count": 4, 00:19:14.947 "bdev_retry_count": 3, 00:19:14.947 "transport_ack_timeout": 0, 00:19:14.947 "ctrlr_loss_timeout_sec": 0, 00:19:14.947 "reconnect_delay_sec": 0, 00:19:14.947 "fast_io_fail_timeout_sec": 0, 00:19:14.947 "disable_auto_failback": false, 00:19:14.947 "generate_uuids": false, 00:19:14.947 "transport_tos": 0, 00:19:14.947 "nvme_error_stat": false, 00:19:14.947 "rdma_srq_size": 0, 00:19:14.947 "io_path_stat": false, 00:19:14.947 "allow_accel_sequence": false, 00:19:14.947 "rdma_max_cq_size": 0, 00:19:14.947 "rdma_cm_event_timeout_ms": 0, 00:19:14.947 "dhchap_digests": [ 00:19:14.947 "sha256", 00:19:14.947 "sha384", 00:19:14.947 "sha512" 00:19:14.947 ], 00:19:14.947 "dhchap_dhgroups": [ 00:19:14.947 "null", 00:19:14.947 "ffdhe2048", 00:19:14.947 "ffdhe3072", 00:19:14.947 "ffdhe4096", 00:19:14.947 "ffdhe6144", 00:19:14.947 "ffdhe8192" 00:19:14.947 ] 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "bdev_nvme_attach_controller", 00:19:14.947 "params": { 00:19:14.947 "name": "TLSTEST", 00:19:14.947 "trtype": "TCP", 00:19:14.947 "adrfam": "IPv4", 00:19:14.947 "traddr": "10.0.0.2", 00:19:14.947 "trsvcid": "4420", 00:19:14.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.947 "prchk_reftag": false, 00:19:14.947 "prchk_guard": false, 00:19:14.947 "ctrlr_loss_timeout_sec": 0, 00:19:14.947 "reconnect_delay_sec": 0, 00:19:14.947 "fast_io_fail_timeout_sec": 0, 00:19:14.947 "psk": "key0", 00:19:14.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.947 "hdgst": false, 00:19:14.947 "ddgst": false, 00:19:14.947 "multipath": "multipath" 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "bdev_nvme_set_hotplug", 00:19:14.947 "params": { 00:19:14.947 "period_us": 100000, 00:19:14.947 "enable": false 00:19:14.947 } 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "method": "bdev_wait_for_examine" 00:19:14.947 } 00:19:14.947 ] 00:19:14.947 }, 00:19:14.947 { 00:19:14.947 "subsystem": "nbd", 00:19:14.947 "config": [] 00:19:14.947 } 00:19:14.947 ] 00:19:14.947 }' 00:19:14.947 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.947 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.947 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.947 [2024-10-30 12:30:47.523719] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:19:14.947 [2024-10-30 12:30:47.523817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635493 ] 00:19:14.947 [2024-10-30 12:30:47.589746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.206 [2024-10-30 12:30:47.647539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.206 [2024-10-30 12:30:47.826068] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.465 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.465 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:15.465 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:15.465 Running I/O for 10 seconds... 00:19:17.772 3471.00 IOPS, 13.56 MiB/s [2024-10-30T11:30:51.468Z] 3568.00 IOPS, 13.94 MiB/s [2024-10-30T11:30:52.427Z] 3561.67 IOPS, 13.91 MiB/s [2024-10-30T11:30:53.360Z] 3554.75 IOPS, 13.89 MiB/s [2024-10-30T11:30:54.291Z] 3581.60 IOPS, 13.99 MiB/s [2024-10-30T11:30:55.221Z] 3580.33 IOPS, 13.99 MiB/s [2024-10-30T11:30:56.151Z] 3590.43 IOPS, 14.03 MiB/s [2024-10-30T11:30:57.522Z] 3600.38 IOPS, 14.06 MiB/s [2024-10-30T11:30:58.456Z] 3608.33 IOPS, 14.10 MiB/s [2024-10-30T11:30:58.456Z] 3619.30 IOPS, 14.14 MiB/s 00:19:25.775 Latency(us) 00:19:25.775 [2024-10-30T11:30:58.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.775 Verification LBA range: start 0x0 length 0x2000 00:19:25.775 TLSTESTn1 : 10.02 3625.03 14.16 0.00 0.00 35251.80 6990.51 53205.52 00:19:25.775 [2024-10-30T11:30:58.456Z] =================================================================================================================== 00:19:25.775 [2024-10-30T11:30:58.456Z] Total : 3625.03 14.16 0.00 0.00 35251.80 6990.51 53205.52 00:19:25.775 { 00:19:25.775 "results": [ 00:19:25.775 { 00:19:25.775 "job": "TLSTESTn1", 00:19:25.775 "core_mask": "0x4", 00:19:25.775 "workload": "verify", 00:19:25.775 "status": "finished", 00:19:25.775 "verify_range": { 00:19:25.775 "start": 0, 00:19:25.775 "length": 8192 00:19:25.775 }, 00:19:25.775 "queue_depth": 128, 00:19:25.775 "io_size": 4096, 00:19:25.775 "runtime": 10.019216, 00:19:25.775 "iops": 3625.034134407323, 00:19:25.775 "mibps": 14.160289587528606, 00:19:25.775 "io_failed": 0, 00:19:25.775 "io_timeout": 0, 00:19:25.775 "avg_latency_us": 35251.797959862946, 00:19:25.775 "min_latency_us": 6990.506666666667, 00:19:25.775 "max_latency_us": 53205.52296296296 00:19:25.775 } 00:19:25.775 ], 00:19:25.775 "core_count": 1 00:19:25.775 } 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 635493 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 635493 ']' 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 635493 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 635493 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 635493' 00:19:25.775 killing process with pid 635493 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 635493 00:19:25.775 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.775 00:19:25.775 Latency(us) 00:19:25.775 [2024-10-30T11:30:58.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.775 [2024-10-30T11:30:58.456Z] =================================================================================================================== 00:19:25.775 [2024-10-30T11:30:58.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 635493 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 635345 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 635345 ']' 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 635345 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 635345 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 635345' 00:19:25.775 killing process with pid 635345 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 635345 00:19:25.775 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 635345 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=636719 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 636719 00:19:26.033 12:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 636719 ']' 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.033 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.033 [2024-10-30 12:30:58.664059] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:26.033 [2024-10-30 12:30:58.664160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.292 [2024-10-30 12:30:58.739412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.292 [2024-10-30 12:30:58.793234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.292 [2024-10-30 12:30:58.793303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.292 [2024-10-30 12:30:58.793327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.292 [2024-10-30 12:30:58.793337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.292 [2024-10-30 12:30:58.793346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
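The app_setup_trace notices above spell out how to inspect the tracepoints enabled by the target's -e 0xFFFF flag. A sketch of the two options the notices name — the spdk_trace binary location is an assumption based on the usual in-tree build layout:

    # decode a live snapshot of the nvmf target's trace ring (instance id 0, per -i 0)
    build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory ring for offline analysis, as the last notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0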
00:19:26.292 [2024-10-30 12:30:58.793912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.9YvOSgRkpn 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9YvOSgRkpn 00:19:26.292 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:26.551 [2024-10-30 12:30:59.175313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.551 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:26.809 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.067 [2024-10-30 12:30:59.704762] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.067 [2024-10-30 12:30:59.705022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.067 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.325 malloc0 00:19:27.582 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.840 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:28.099 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=637000 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 637000 /var/tmp/bdevperf.sock 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 637000 ']' 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.357 12:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.357 [2024-10-30 12:31:00.901839] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:28.357 [2024-10-30 12:31:00.901917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637000 ] 00:19:28.357 [2024-10-30 12:31:00.966844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.357 [2024-10-30 12:31:01.023559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.615 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.615 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:28.615 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:28.872 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:29.130 [2024-10-30 12:31:01.675451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.130 nvme0n1 00:19:29.130 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:29.387 Running I/O for 1 seconds... 
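Stripped of the xtrace prefixes and timestamps, the setup_nvmf_tgt sequence traced above (target/tls.sh@52 through @59) reduces to seven RPCs; a condensed recap, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k makes this a TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The keyring entry and the --psk reference on nvmf_subsystem_add_host are what pair the target side with the bdevperf attach shown earlier.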
00:19:30.319 3520.00 IOPS, 13.75 MiB/s 00:19:30.319 Latency(us) 00:19:30.319 [2024-10-30T11:31:03.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.319 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:30.319 Verification LBA range: start 0x0 length 0x2000 00:19:30.319 nvme0n1 : 1.02 3581.77 13.99 0.00 0.00 35404.02 7233.23 35146.71 00:19:30.319 [2024-10-30T11:31:03.000Z] =================================================================================================================== 00:19:30.319 [2024-10-30T11:31:03.000Z] Total : 3581.77 13.99 0.00 0.00 35404.02 7233.23 35146.71 00:19:30.319 { 00:19:30.319 "results": [ 00:19:30.319 { 00:19:30.319 "job": "nvme0n1", 00:19:30.319 "core_mask": "0x2", 00:19:30.319 "workload": "verify", 00:19:30.319 "status": "finished", 00:19:30.319 "verify_range": { 00:19:30.319 "start": 0, 00:19:30.319 "length": 8192 00:19:30.319 }, 00:19:30.319 "queue_depth": 128, 00:19:30.319 "io_size": 4096, 00:19:30.319 "runtime": 1.01849, 00:19:30.319 "iops": 3581.773016917201, 00:19:30.319 "mibps": 13.991300847332816, 00:19:30.319 "io_failed": 0, 00:19:30.319 "io_timeout": 0, 00:19:30.319 "avg_latency_us": 35404.018193632226, 00:19:30.319 "min_latency_us": 7233.2325925925925, 00:19:30.319 "max_latency_us": 35146.71407407407 00:19:30.319 } 00:19:30.319 ], 00:19:30.319 "core_count": 1 00:19:30.319 } 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 637000 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 637000 ']' 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 637000 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 637000 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 637000' 00:19:30.319 killing process with pid 637000 00:19:30.319 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 637000 00:19:30.320 Received shutdown signal, test time was about 1.000000 seconds 00:19:30.320 00:19:30.320 Latency(us) 00:19:30.320 [2024-10-30T11:31:03.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.320 [2024-10-30T11:31:03.001Z] =================================================================================================================== 00:19:30.320 [2024-10-30T11:31:03.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.320 12:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 637000 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 636719 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 636719 ']' 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 636719 00:19:30.576 12:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 636719 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 636719' 00:19:30.576 killing process with pid 636719 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 636719 00:19:30.576 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 636719 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=637391 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 637391 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 637391 ']' 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.833 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.833 [2024-10-30 12:31:03.458420] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:30.833 [2024-10-30 12:31:03.458508] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.091 [2024-10-30 12:31:03.530445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.091 [2024-10-30 12:31:03.589482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.091 [2024-10-30 12:31:03.589548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:31.091 [2024-10-30 12:31:03.589562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.091 [2024-10-30 12:31:03.589573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.091 [2024-10-30 12:31:03.589590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.091 [2024-10-30 12:31:03.590183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.091 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.091 [2024-10-30 12:31:03.735712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.091 malloc0 00:19:31.091 [2024-10-30 12:31:03.766047] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.091 [2024-10-30 12:31:03.766316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=637415 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 637415 /var/tmp/bdevperf.sock 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 637415 ']' 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.349 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.349 [2024-10-30 12:31:03.837355] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
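The iops and mibps fields in the bdevperf JSON results are consistent by construction: MiB/s is iops × io_size / 2^20. A one-line check against the 3581.77 IOPS figure reported for the nvme0n1 run at target/tls.sh@234 above:

    awk 'BEGIN { iops = 3581.77; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / 1048576 }'   # prints 13.99, matching the mibps field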
00:19:31.349 [2024-10-30 12:31:03.837421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637415 ] 00:19:31.349 [2024-10-30 12:31:03.902455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.349 [2024-10-30 12:31:03.959087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.607 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.607 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.607 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn 00:19:31.865 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:32.123 [2024-10-30 12:31:04.691286] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.123 nvme0n1 00:19:32.123 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.381 Running I/O for 1 seconds... 00:19:33.315 3517.00 IOPS, 13.74 MiB/s 00:19:33.315 Latency(us) 00:19:33.315 [2024-10-30T11:31:05.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.315 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:33.315 Verification LBA range: start 0x0 length 0x2000 00:19:33.315 nvme0n1 : 1.02 3575.69 13.97 0.00 0.00 35473.03 6068.15 34952.53 00:19:33.315 [2024-10-30T11:31:05.996Z] =================================================================================================================== 00:19:33.315 [2024-10-30T11:31:05.996Z] Total : 3575.69 13.97 0.00 0.00 35473.03 6068.15 34952.53 00:19:33.315 { 00:19:33.315 "results": [ 00:19:33.315 { 00:19:33.315 "job": "nvme0n1", 00:19:33.315 "core_mask": "0x2", 00:19:33.315 "workload": "verify", 00:19:33.315 "status": "finished", 00:19:33.315 "verify_range": { 00:19:33.315 "start": 0, 00:19:33.315 "length": 8192 00:19:33.315 }, 00:19:33.315 "queue_depth": 128, 00:19:33.315 "io_size": 4096, 00:19:33.315 "runtime": 1.019383, 00:19:33.315 "iops": 3575.6923550814563, 00:19:33.315 "mibps": 13.967548262036939, 00:19:33.315 "io_failed": 0, 00:19:33.315 "io_timeout": 0, 00:19:33.315 "avg_latency_us": 35473.0289488391, 00:19:33.315 "min_latency_us": 6068.148148148148, 00:19:33.315 "max_latency_us": 34952.53333333333 00:19:33.315 } 00:19:33.315 ], 00:19:33.315 "core_count": 1 00:19:33.315 } 00:19:33.315 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:33.315 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.315 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.573 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.573 12:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:33.573 "subsystems": [ 00:19:33.573 { 00:19:33.573 "subsystem": "keyring", 00:19:33.573 "config": [ 00:19:33.573 { 00:19:33.573 "method": "keyring_file_add_key", 00:19:33.573 "params": { 00:19:33.573 "name": "key0", 00:19:33.573 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:33.573 } 00:19:33.573 } 00:19:33.573 ] 00:19:33.573 }, 00:19:33.573 { 00:19:33.573 "subsystem": "iobuf", 00:19:33.573 "config": [ 00:19:33.573 { 00:19:33.573 "method": "iobuf_set_options", 00:19:33.573 "params": { 00:19:33.573 "small_pool_count": 8192, 00:19:33.573 "large_pool_count": 1024, 00:19:33.573 "small_bufsize": 8192, 00:19:33.573 "large_bufsize": 135168, 00:19:33.573 "enable_numa": false 00:19:33.573 } 00:19:33.573 } 00:19:33.573 ] 00:19:33.573 }, 00:19:33.573 { 00:19:33.573 "subsystem": "sock", 00:19:33.573 "config": [ 00:19:33.573 { 00:19:33.573 "method": "sock_set_default_impl", 00:19:33.573 "params": { 00:19:33.573 "impl_name": "posix" 00:19:33.573 } 00:19:33.573 }, 00:19:33.573 { 00:19:33.573 "method": "sock_impl_set_options", 00:19:33.573 "params": { 00:19:33.573 "impl_name": "ssl", 00:19:33.573 "recv_buf_size": 4096, 00:19:33.573 "send_buf_size": 4096, 00:19:33.573 "enable_recv_pipe": true, 00:19:33.573 "enable_quickack": false, 00:19:33.573 "enable_placement_id": 0, 00:19:33.573 "enable_zerocopy_send_server": true, 00:19:33.573 "enable_zerocopy_send_client": false, 00:19:33.573 "zerocopy_threshold": 0, 00:19:33.573 "tls_version": 0, 00:19:33.573 "enable_ktls": false 00:19:33.573 } 00:19:33.573 }, 00:19:33.573 { 00:19:33.573 "method": "sock_impl_set_options", 00:19:33.573 "params": { 00:19:33.573 "impl_name": "posix", 00:19:33.573 "recv_buf_size": 2097152, 00:19:33.573 "send_buf_size": 2097152, 00:19:33.573 "enable_recv_pipe": true, 00:19:33.573 "enable_quickack": false, 00:19:33.574 "enable_placement_id": 0, 00:19:33.574 "enable_zerocopy_send_server": true, 00:19:33.574 "enable_zerocopy_send_client": false, 00:19:33.574 "zerocopy_threshold": 0, 00:19:33.574 "tls_version": 0, 00:19:33.574 "enable_ktls": false 00:19:33.574 } 00:19:33.574 } 00:19:33.574 ] 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "subsystem": "vmd", 00:19:33.574 "config": [] 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "subsystem": "accel", 00:19:33.574 "config": [ 00:19:33.574 { 00:19:33.574 "method": "accel_set_options", 00:19:33.574 "params": { 00:19:33.574 "small_cache_size": 128, 00:19:33.574 "large_cache_size": 16, 00:19:33.574 "task_count": 2048, 00:19:33.574 "sequence_count": 2048, 00:19:33.574 "buf_count": 2048 00:19:33.574 } 00:19:33.574 } 00:19:33.574 ] 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "subsystem": "bdev", 00:19:33.574 "config": [ 00:19:33.574 { 00:19:33.574 "method": "bdev_set_options", 00:19:33.574 "params": { 00:19:33.574 "bdev_io_pool_size": 65535, 00:19:33.574 "bdev_io_cache_size": 256, 00:19:33.574 "bdev_auto_examine": true, 00:19:33.574 "iobuf_small_cache_size": 128, 00:19:33.574 "iobuf_large_cache_size": 16 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "bdev_raid_set_options", 00:19:33.574 "params": { 00:19:33.574 "process_window_size_kb": 1024, 00:19:33.574 "process_max_bandwidth_mb_sec": 0 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "bdev_iscsi_set_options", 00:19:33.574 "params": { 00:19:33.574 "timeout_sec": 30 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "bdev_nvme_set_options", 00:19:33.574 "params": { 00:19:33.574 "action_on_timeout": "none", 00:19:33.574 
"timeout_us": 0, 00:19:33.574 "timeout_admin_us": 0, 00:19:33.574 "keep_alive_timeout_ms": 10000, 00:19:33.574 "arbitration_burst": 0, 00:19:33.574 "low_priority_weight": 0, 00:19:33.574 "medium_priority_weight": 0, 00:19:33.574 "high_priority_weight": 0, 00:19:33.574 "nvme_adminq_poll_period_us": 10000, 00:19:33.574 "nvme_ioq_poll_period_us": 0, 00:19:33.574 "io_queue_requests": 0, 00:19:33.574 "delay_cmd_submit": true, 00:19:33.574 "transport_retry_count": 4, 00:19:33.574 "bdev_retry_count": 3, 00:19:33.574 "transport_ack_timeout": 0, 00:19:33.574 "ctrlr_loss_timeout_sec": 0, 00:19:33.574 "reconnect_delay_sec": 0, 00:19:33.574 "fast_io_fail_timeout_sec": 0, 00:19:33.574 "disable_auto_failback": false, 00:19:33.574 "generate_uuids": false, 00:19:33.574 "transport_tos": 0, 00:19:33.574 "nvme_error_stat": false, 00:19:33.574 "rdma_srq_size": 0, 00:19:33.574 "io_path_stat": false, 00:19:33.574 "allow_accel_sequence": false, 00:19:33.574 "rdma_max_cq_size": 0, 00:19:33.574 "rdma_cm_event_timeout_ms": 0, 00:19:33.574 "dhchap_digests": [ 00:19:33.574 "sha256", 00:19:33.574 "sha384", 00:19:33.574 "sha512" 00:19:33.574 ], 00:19:33.574 "dhchap_dhgroups": [ 00:19:33.574 "null", 00:19:33.574 "ffdhe2048", 00:19:33.574 "ffdhe3072", 00:19:33.574 "ffdhe4096", 00:19:33.574 "ffdhe6144", 00:19:33.574 "ffdhe8192" 00:19:33.574 ] 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "bdev_nvme_set_hotplug", 00:19:33.574 "params": { 00:19:33.574 "period_us": 100000, 00:19:33.574 "enable": false 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "bdev_malloc_create", 00:19:33.574 "params": { 00:19:33.574 "name": "malloc0", 00:19:33.574 "num_blocks": 8192, 00:19:33.574 "block_size": 4096, 00:19:33.574 "physical_block_size": 4096, 00:19:33.574 "uuid": "c9a30bd7-a2e8-41ac-9312-0adbbaa009ca", 00:19:33.574 "optimal_io_boundary": 0, 00:19:33.574 "md_size": 0, 00:19:33.574 "dif_type": 0, 00:19:33.574 "dif_is_head_of_md": false, 00:19:33.574 "dif_pi_format": 0 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "bdev_wait_for_examine" 00:19:33.574 } 00:19:33.574 ] 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "subsystem": "nbd", 00:19:33.574 "config": [] 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "subsystem": "scheduler", 00:19:33.574 "config": [ 00:19:33.574 { 00:19:33.574 "method": "framework_set_scheduler", 00:19:33.574 "params": { 00:19:33.574 "name": "static" 00:19:33.574 } 00:19:33.574 } 00:19:33.574 ] 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "subsystem": "nvmf", 00:19:33.574 "config": [ 00:19:33.574 { 00:19:33.574 "method": "nvmf_set_config", 00:19:33.574 "params": { 00:19:33.574 "discovery_filter": "match_any", 00:19:33.574 "admin_cmd_passthru": { 00:19:33.574 "identify_ctrlr": false 00:19:33.574 }, 00:19:33.574 "dhchap_digests": [ 00:19:33.574 "sha256", 00:19:33.574 "sha384", 00:19:33.574 "sha512" 00:19:33.574 ], 00:19:33.574 "dhchap_dhgroups": [ 00:19:33.574 "null", 00:19:33.574 "ffdhe2048", 00:19:33.574 "ffdhe3072", 00:19:33.574 "ffdhe4096", 00:19:33.574 "ffdhe6144", 00:19:33.574 "ffdhe8192" 00:19:33.574 ] 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_set_max_subsystems", 00:19:33.574 "params": { 00:19:33.574 "max_subsystems": 1024 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_set_crdt", 00:19:33.574 "params": { 00:19:33.574 "crdt1": 0, 00:19:33.574 "crdt2": 0, 00:19:33.574 "crdt3": 0 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_create_transport", 00:19:33.574 "params": 
{ 00:19:33.574 "trtype": "TCP", 00:19:33.574 "max_queue_depth": 128, 00:19:33.574 "max_io_qpairs_per_ctrlr": 127, 00:19:33.574 "in_capsule_data_size": 4096, 00:19:33.574 "max_io_size": 131072, 00:19:33.574 "io_unit_size": 131072, 00:19:33.574 "max_aq_depth": 128, 00:19:33.574 "num_shared_buffers": 511, 00:19:33.574 "buf_cache_size": 4294967295, 00:19:33.574 "dif_insert_or_strip": false, 00:19:33.574 "zcopy": false, 00:19:33.574 "c2h_success": false, 00:19:33.574 "sock_priority": 0, 00:19:33.574 "abort_timeout_sec": 1, 00:19:33.574 "ack_timeout": 0, 00:19:33.574 "data_wr_pool_size": 0 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_create_subsystem", 00:19:33.574 "params": { 00:19:33.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.574 "allow_any_host": false, 00:19:33.574 "serial_number": "00000000000000000000", 00:19:33.574 "model_number": "SPDK bdev Controller", 00:19:33.574 "max_namespaces": 32, 00:19:33.574 "min_cntlid": 1, 00:19:33.574 "max_cntlid": 65519, 00:19:33.574 "ana_reporting": false 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_subsystem_add_host", 00:19:33.574 "params": { 00:19:33.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.574 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.574 "psk": "key0" 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_subsystem_add_ns", 00:19:33.574 "params": { 00:19:33.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.574 "namespace": { 00:19:33.574 "nsid": 1, 00:19:33.574 "bdev_name": "malloc0", 00:19:33.574 "nguid": "C9A30BD7A2E841AC93120ADBBAA009CA", 00:19:33.574 "uuid": "c9a30bd7-a2e8-41ac-9312-0adbbaa009ca", 00:19:33.574 "no_auto_visible": false 00:19:33.574 } 00:19:33.574 } 00:19:33.574 }, 00:19:33.574 { 00:19:33.574 "method": "nvmf_subsystem_add_listener", 00:19:33.574 "params": { 00:19:33.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.574 "listen_address": { 00:19:33.574 "trtype": "TCP", 00:19:33.574 "adrfam": "IPv4", 00:19:33.574 "traddr": "10.0.0.2", 00:19:33.574 "trsvcid": "4420" 00:19:33.574 }, 00:19:33.574 "secure_channel": false, 00:19:33.574 "sock_impl": "ssl" 00:19:33.574 } 00:19:33.574 } 00:19:33.574 ] 00:19:33.574 } 00:19:33.574 ] 00:19:33.574 }' 00:19:33.574 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:33.832 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:33.832 "subsystems": [ 00:19:33.832 { 00:19:33.832 "subsystem": "keyring", 00:19:33.832 "config": [ 00:19:33.832 { 00:19:33.832 "method": "keyring_file_add_key", 00:19:33.832 "params": { 00:19:33.832 "name": "key0", 00:19:33.832 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:33.832 } 00:19:33.832 } 00:19:33.832 ] 00:19:33.832 }, 00:19:33.832 { 00:19:33.832 "subsystem": "iobuf", 00:19:33.832 "config": [ 00:19:33.832 { 00:19:33.833 "method": "iobuf_set_options", 00:19:33.833 "params": { 00:19:33.833 "small_pool_count": 8192, 00:19:33.833 "large_pool_count": 1024, 00:19:33.833 "small_bufsize": 8192, 00:19:33.833 "large_bufsize": 135168, 00:19:33.833 "enable_numa": false 00:19:33.833 } 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "subsystem": "sock", 00:19:33.833 "config": [ 00:19:33.833 { 00:19:33.833 "method": "sock_set_default_impl", 00:19:33.833 "params": { 00:19:33.833 "impl_name": "posix" 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "sock_impl_set_options", 00:19:33.833 
"params": { 00:19:33.833 "impl_name": "ssl", 00:19:33.833 "recv_buf_size": 4096, 00:19:33.833 "send_buf_size": 4096, 00:19:33.833 "enable_recv_pipe": true, 00:19:33.833 "enable_quickack": false, 00:19:33.833 "enable_placement_id": 0, 00:19:33.833 "enable_zerocopy_send_server": true, 00:19:33.833 "enable_zerocopy_send_client": false, 00:19:33.833 "zerocopy_threshold": 0, 00:19:33.833 "tls_version": 0, 00:19:33.833 "enable_ktls": false 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "sock_impl_set_options", 00:19:33.833 "params": { 00:19:33.833 "impl_name": "posix", 00:19:33.833 "recv_buf_size": 2097152, 00:19:33.833 "send_buf_size": 2097152, 00:19:33.833 "enable_recv_pipe": true, 00:19:33.833 "enable_quickack": false, 00:19:33.833 "enable_placement_id": 0, 00:19:33.833 "enable_zerocopy_send_server": true, 00:19:33.833 "enable_zerocopy_send_client": false, 00:19:33.833 "zerocopy_threshold": 0, 00:19:33.833 "tls_version": 0, 00:19:33.833 "enable_ktls": false 00:19:33.833 } 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "subsystem": "vmd", 00:19:33.833 "config": [] 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "subsystem": "accel", 00:19:33.833 "config": [ 00:19:33.833 { 00:19:33.833 "method": "accel_set_options", 00:19:33.833 "params": { 00:19:33.833 "small_cache_size": 128, 00:19:33.833 "large_cache_size": 16, 00:19:33.833 "task_count": 2048, 00:19:33.833 "sequence_count": 2048, 00:19:33.833 "buf_count": 2048 00:19:33.833 } 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "subsystem": "bdev", 00:19:33.833 "config": [ 00:19:33.833 { 00:19:33.833 "method": "bdev_set_options", 00:19:33.833 "params": { 00:19:33.833 "bdev_io_pool_size": 65535, 00:19:33.833 "bdev_io_cache_size": 256, 00:19:33.833 "bdev_auto_examine": true, 00:19:33.833 "iobuf_small_cache_size": 128, 00:19:33.833 "iobuf_large_cache_size": 16 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_raid_set_options", 00:19:33.833 "params": { 00:19:33.833 "process_window_size_kb": 1024, 00:19:33.833 "process_max_bandwidth_mb_sec": 0 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_iscsi_set_options", 00:19:33.833 "params": { 00:19:33.833 "timeout_sec": 30 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_nvme_set_options", 00:19:33.833 "params": { 00:19:33.833 "action_on_timeout": "none", 00:19:33.833 "timeout_us": 0, 00:19:33.833 "timeout_admin_us": 0, 00:19:33.833 "keep_alive_timeout_ms": 10000, 00:19:33.833 "arbitration_burst": 0, 00:19:33.833 "low_priority_weight": 0, 00:19:33.833 "medium_priority_weight": 0, 00:19:33.833 "high_priority_weight": 0, 00:19:33.833 "nvme_adminq_poll_period_us": 10000, 00:19:33.833 "nvme_ioq_poll_period_us": 0, 00:19:33.833 "io_queue_requests": 512, 00:19:33.833 "delay_cmd_submit": true, 00:19:33.833 "transport_retry_count": 4, 00:19:33.833 "bdev_retry_count": 3, 00:19:33.833 "transport_ack_timeout": 0, 00:19:33.833 "ctrlr_loss_timeout_sec": 0, 00:19:33.833 "reconnect_delay_sec": 0, 00:19:33.833 "fast_io_fail_timeout_sec": 0, 00:19:33.833 "disable_auto_failback": false, 00:19:33.833 "generate_uuids": false, 00:19:33.833 "transport_tos": 0, 00:19:33.833 "nvme_error_stat": false, 00:19:33.833 "rdma_srq_size": 0, 00:19:33.833 "io_path_stat": false, 00:19:33.833 "allow_accel_sequence": false, 00:19:33.833 "rdma_max_cq_size": 0, 00:19:33.833 "rdma_cm_event_timeout_ms": 0, 00:19:33.833 "dhchap_digests": [ 00:19:33.833 "sha256", 00:19:33.833 "sha384", 00:19:33.833 
"sha512" 00:19:33.833 ], 00:19:33.833 "dhchap_dhgroups": [ 00:19:33.833 "null", 00:19:33.833 "ffdhe2048", 00:19:33.833 "ffdhe3072", 00:19:33.833 "ffdhe4096", 00:19:33.833 "ffdhe6144", 00:19:33.833 "ffdhe8192" 00:19:33.833 ] 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_nvme_attach_controller", 00:19:33.833 "params": { 00:19:33.833 "name": "nvme0", 00:19:33.833 "trtype": "TCP", 00:19:33.833 "adrfam": "IPv4", 00:19:33.833 "traddr": "10.0.0.2", 00:19:33.833 "trsvcid": "4420", 00:19:33.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.833 "prchk_reftag": false, 00:19:33.833 "prchk_guard": false, 00:19:33.833 "ctrlr_loss_timeout_sec": 0, 00:19:33.833 "reconnect_delay_sec": 0, 00:19:33.833 "fast_io_fail_timeout_sec": 0, 00:19:33.833 "psk": "key0", 00:19:33.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.833 "hdgst": false, 00:19:33.833 "ddgst": false, 00:19:33.833 "multipath": "multipath" 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_nvme_set_hotplug", 00:19:33.833 "params": { 00:19:33.833 "period_us": 100000, 00:19:33.833 "enable": false 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_enable_histogram", 00:19:33.833 "params": { 00:19:33.833 "name": "nvme0n1", 00:19:33.833 "enable": true 00:19:33.833 } 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "method": "bdev_wait_for_examine" 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 }, 00:19:33.833 { 00:19:33.833 "subsystem": "nbd", 00:19:33.833 "config": [] 00:19:33.833 } 00:19:33.833 ] 00:19:33.833 }' 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 637415 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 637415 ']' 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 637415 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 637415 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 637415' 00:19:33.833 killing process with pid 637415 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 637415 00:19:33.833 Received shutdown signal, test time was about 1.000000 seconds 00:19:33.833 00:19:33.833 Latency(us) 00:19:33.833 [2024-10-30T11:31:06.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.833 [2024-10-30T11:31:06.514Z] =================================================================================================================== 00:19:33.833 [2024-10-30T11:31:06.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.833 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 637415 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 637391 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 637391 ']' 
00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 637391 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 637391 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 637391' 00:19:34.091 killing process with pid 637391 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 637391 00:19:34.091 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 637391 00:19:34.350 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:34.350 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.350 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:34.350 "subsystems": [ 00:19:34.350 { 00:19:34.350 "subsystem": "keyring", 00:19:34.350 "config": [ 00:19:34.350 { 00:19:34.350 "method": "keyring_file_add_key", 00:19:34.350 "params": { 00:19:34.350 "name": "key0", 00:19:34.350 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:34.350 } 00:19:34.350 } 00:19:34.350 ] 00:19:34.350 }, 00:19:34.350 { 00:19:34.350 "subsystem": "iobuf", 00:19:34.350 "config": [ 00:19:34.350 { 00:19:34.350 "method": "iobuf_set_options", 00:19:34.350 "params": { 00:19:34.350 "small_pool_count": 8192, 00:19:34.350 "large_pool_count": 1024, 00:19:34.350 "small_bufsize": 8192, 00:19:34.350 "large_bufsize": 135168, 00:19:34.350 "enable_numa": false 00:19:34.350 } 00:19:34.350 } 00:19:34.350 ] 00:19:34.350 }, 00:19:34.350 { 00:19:34.350 "subsystem": "sock", 00:19:34.350 "config": [ 00:19:34.350 { 00:19:34.350 "method": "sock_set_default_impl", 00:19:34.350 "params": { 00:19:34.350 "impl_name": "posix" 00:19:34.350 } 00:19:34.350 }, 00:19:34.350 { 00:19:34.350 "method": "sock_impl_set_options", 00:19:34.350 "params": { 00:19:34.350 "impl_name": "ssl", 00:19:34.350 "recv_buf_size": 4096, 00:19:34.350 "send_buf_size": 4096, 00:19:34.350 "enable_recv_pipe": true, 00:19:34.350 "enable_quickack": false, 00:19:34.350 "enable_placement_id": 0, 00:19:34.350 "enable_zerocopy_send_server": true, 00:19:34.350 "enable_zerocopy_send_client": false, 00:19:34.350 "zerocopy_threshold": 0, 00:19:34.350 "tls_version": 0, 00:19:34.350 "enable_ktls": false 00:19:34.350 } 00:19:34.350 }, 00:19:34.350 { 00:19:34.350 "method": "sock_impl_set_options", 00:19:34.350 "params": { 00:19:34.350 "impl_name": "posix", 00:19:34.350 "recv_buf_size": 2097152, 00:19:34.350 "send_buf_size": 2097152, 00:19:34.350 "enable_recv_pipe": true, 00:19:34.350 "enable_quickack": false, 00:19:34.350 "enable_placement_id": 0, 00:19:34.350 "enable_zerocopy_send_server": true, 00:19:34.350 "enable_zerocopy_send_client": false, 00:19:34.350 "zerocopy_threshold": 0, 00:19:34.350 "tls_version": 0, 00:19:34.350 "enable_ktls": false 00:19:34.350 } 00:19:34.350 } 00:19:34.351 ] 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "subsystem": "vmd", 
00:19:34.351 "config": [] 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "subsystem": "accel", 00:19:34.351 "config": [ 00:19:34.351 { 00:19:34.351 "method": "accel_set_options", 00:19:34.351 "params": { 00:19:34.351 "small_cache_size": 128, 00:19:34.351 "large_cache_size": 16, 00:19:34.351 "task_count": 2048, 00:19:34.351 "sequence_count": 2048, 00:19:34.351 "buf_count": 2048 00:19:34.351 } 00:19:34.351 } 00:19:34.351 ] 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "subsystem": "bdev", 00:19:34.351 "config": [ 00:19:34.351 { 00:19:34.351 "method": "bdev_set_options", 00:19:34.351 "params": { 00:19:34.351 "bdev_io_pool_size": 65535, 00:19:34.351 "bdev_io_cache_size": 256, 00:19:34.351 "bdev_auto_examine": true, 00:19:34.351 "iobuf_small_cache_size": 128, 00:19:34.351 "iobuf_large_cache_size": 16 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "bdev_raid_set_options", 00:19:34.351 "params": { 00:19:34.351 "process_window_size_kb": 1024, 00:19:34.351 "process_max_bandwidth_mb_sec": 0 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "bdev_iscsi_set_options", 00:19:34.351 "params": { 00:19:34.351 "timeout_sec": 30 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "bdev_nvme_set_options", 00:19:34.351 "params": { 00:19:34.351 "action_on_timeout": "none", 00:19:34.351 "timeout_us": 0, 00:19:34.351 "timeout_admin_us": 0, 00:19:34.351 "keep_alive_timeout_ms": 10000, 00:19:34.351 "arbitration_burst": 0, 00:19:34.351 "low_priority_weight": 0, 00:19:34.351 "medium_priority_weight": 0, 00:19:34.351 "high_priority_weight": 0, 00:19:34.351 "nvme_adminq_poll_period_us": 10000, 00:19:34.351 "nvme_ioq_poll_period_us": 0, 00:19:34.351 "io_queue_requests": 0, 00:19:34.351 "delay_cmd_submit": true, 00:19:34.351 "transport_retry_count": 4, 00:19:34.351 "bdev_retry_count": 3, 00:19:34.351 "transport_ack_timeout": 0, 00:19:34.351 "ctrlr_loss_timeout_sec": 0, 00:19:34.351 "reconnect_delay_sec": 0, 00:19:34.351 "fast_io_fail_timeout_sec": 0, 00:19:34.351 "disable_auto_failback": false, 00:19:34.351 "generate_uuids": false, 00:19:34.351 "transport_tos": 0, 00:19:34.351 "nvme_error_stat": false, 00:19:34.351 "rdma_srq_size": 0, 00:19:34.351 "io_path_stat": false, 00:19:34.351 "allow_accel_sequence": false, 00:19:34.351 "rdma_max_cq_size": 0, 00:19:34.351 "rdma_cm_event_timeout_ms": 0, 00:19:34.351 "dhchap_digests": [ 00:19:34.351 "sha256", 00:19:34.351 "sha384", 00:19:34.351 "sha512" 00:19:34.351 ], 00:19:34.351 "dhchap_dhgroups": [ 00:19:34.351 "null", 00:19:34.351 "ffdhe2048", 00:19:34.351 "ffdhe3072", 00:19:34.351 "ffdhe4096", 00:19:34.351 "ffdhe6144", 00:19:34.351 "ffdhe8192" 00:19:34.351 ] 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "bdev_nvme_set_hotplug", 00:19:34.351 "params": { 00:19:34.351 "period_us": 100000, 00:19:34.351 "enable": false 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "bdev_malloc_create", 00:19:34.351 "params": { 00:19:34.351 "name": "malloc0", 00:19:34.351 "num_blocks": 8192, 00:19:34.351 "block_size": 4096, 00:19:34.351 "physical_block_size": 4096, 00:19:34.351 "uuid": "c9a30bd7-a2e8-41ac-9312-0adbbaa009ca", 00:19:34.351 "optimal_io_boundary": 0, 00:19:34.351 "md_size": 0, 00:19:34.351 "dif_type": 0, 00:19:34.351 "dif_is_head_of_md": false, 00:19:34.351 "dif_pi_format": 0 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "bdev_wait_for_examine" 00:19:34.351 } 00:19:34.351 ] 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "subsystem": "nbd", 00:19:34.351 "config": [] 
00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "subsystem": "scheduler", 00:19:34.351 "config": [ 00:19:34.351 { 00:19:34.351 "method": "framework_set_scheduler", 00:19:34.351 "params": { 00:19:34.351 "name": "static" 00:19:34.351 } 00:19:34.351 } 00:19:34.351 ] 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "subsystem": "nvmf", 00:19:34.351 "config": [ 00:19:34.351 { 00:19:34.351 "method": "nvmf_set_config", 00:19:34.351 "params": { 00:19:34.351 "discovery_filter": "match_any", 00:19:34.351 "admin_cmd_passthru": { 00:19:34.351 "identify_ctrlr": false 00:19:34.351 }, 00:19:34.351 "dhchap_digests": [ 00:19:34.351 "sha256", 00:19:34.351 "sha384", 00:19:34.351 "sha512" 00:19:34.351 ], 00:19:34.351 "dhchap_dhgroups": [ 00:19:34.351 "null", 00:19:34.351 "ffdhe2048", 00:19:34.351 "ffdhe3072", 00:19:34.351 "ffdhe4096", 00:19:34.351 "ffdhe6144", 00:19:34.351 "ffdhe8192" 00:19:34.351 ] 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_set_max_subsystems", 00:19:34.351 "params": { 00:19:34.351 "max_subsystems": 1024 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_set_crdt", 00:19:34.351 "params": { 00:19:34.351 "crdt1": 0, 00:19:34.351 "crdt2": 0, 00:19:34.351 "crdt3": 0 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_create_transport", 00:19:34.351 "params": { 00:19:34.351 "trtype": "TCP", 00:19:34.351 "max_queue_depth": 128, 00:19:34.351 "max_io_qpairs_per_ctrlr": 127, 00:19:34.351 "in_capsule_data_size": 4096, 00:19:34.351 "max_io_size": 131072, 00:19:34.351 "io_unit_size": 131072, 00:19:34.351 "max_aq_depth": 128, 00:19:34.351 "num_shared_buffers": 511, 00:19:34.351 "buf_cache_size": 4294967295, 00:19:34.351 "dif_insert_or_strip": false, 00:19:34.351 "zcopy": false, 00:19:34.351 "c2h_success": false, 00:19:34.351 "sock_priority": 0, 00:19:34.351 "abort_timeout_sec": 1, 00:19:34.351 "ack_timeout": 0, 00:19:34.351 "data_wr_pool_size": 0 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_create_subsystem", 00:19:34.351 "params": { 00:19:34.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.351 "allow_any_host": false, 00:19:34.351 "serial_number": "00000000000000000000", 00:19:34.351 "model_number": "SPDK bdev Controller", 00:19:34.351 "max_namespaces": 32, 00:19:34.351 "min_cntlid": 1, 00:19:34.351 "max_cntlid": 65519, 00:19:34.351 "ana_reporting": false 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_subsystem_add_host", 00:19:34.351 "params": { 00:19:34.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.351 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.351 "psk": "key0" 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_subsystem_add_ns", 00:19:34.351 "params": { 00:19:34.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.351 "namespace": { 00:19:34.351 "nsid": 1, 00:19:34.351 "bdev_name": "malloc0", 00:19:34.351 "nguid": "C9A30BD7A2E841AC93120ADBBAA009CA", 00:19:34.351 "uuid": "c9a30bd7-a2e8-41ac-9312-0adbbaa009ca", 00:19:34.351 "no_auto_visible": false 00:19:34.351 } 00:19:34.351 } 00:19:34.351 }, 00:19:34.351 { 00:19:34.351 "method": "nvmf_subsystem_add_listener", 00:19:34.351 "params": { 00:19:34.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.351 "listen_address": { 00:19:34.351 "trtype": "TCP", 00:19:34.351 "adrfam": "IPv4", 00:19:34.351 "traddr": "10.0.0.2", 00:19:34.351 "trsvcid": "4420" 00:19:34.351 }, 00:19:34.351 "secure_channel": false, 00:19:34.351 "sock_impl": "ssl" 00:19:34.351 } 00:19:34.351 } 00:19:34.351 ] 00:19:34.351 } 00:19:34.351 
] 00:19:34.351 }' 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=637821 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 637821 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 637821 ']' 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.351 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.351 [2024-10-30 12:31:06.959520] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:34.351 [2024-10-30 12:31:06.959614] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.351 [2024-10-30 12:31:07.030234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.610 [2024-10-30 12:31:07.081631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.610 [2024-10-30 12:31:07.081691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.610 [2024-10-30 12:31:07.081715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.610 [2024-10-30 12:31:07.081727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.610 [2024-10-30 12:31:07.081737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
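The JSON blob closed above is the target-side configuration fed to nvmf_tgt over /dev/fd/62; every "method" entry in it corresponds one-to-one to an SPDK RPC. A minimal sketch of the same TLS-enabled target driven through runtime RPCs instead of a startup config follows. It is illustrative only: the PSK path is assumed to match the initiator's (the target's keyring section is not shown above), and short-option spellings can differ between SPDK releases.

# Hedged sketch: equivalent runtime-RPC setup for the target configured above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.9YvOSgRkpn   # assumed: same PSK file the initiator registers
$rpc nvmf_create_transport -t TCP -q 128 -u 4096     # max_queue_depth / in_capsule_data_size as above
$rpc bdev_malloc_create -b malloc0 32 512            # backing bdev for nsid 1 (sizes illustrative)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --sock-impl ssl   # --sock-impl spelling assumed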
00:19:34.610 [2024-10-30 12:31:07.082363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.868 [2024-10-30 12:31:07.327335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.868 [2024-10-30 12:31:07.359358] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.868 [2024-10-30 12:31:07.359591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=637974 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 637974 /var/tmp/bdevperf.sock 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 637974 ']' 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
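bdevperf is launched with -z, so it idles until driven over its RPC socket, and the script then blocks in waitforlisten until that socket answers. A rough equivalent of the polling loop (a simplified sketch; the helper name and retry bounds are illustrative, not autotest_common.sh's exact code):

# Poll an SPDK app's RPC Unix socket until it responds. rpc_get_methods is a
# cheap RPC that every SPDK app serves, which makes it a reasonable liveness probe.
wait_for_rpc() {
    local sock=$1 i
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$sock" rpc_get_methods > /dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}
wait_for_rpc /var/tmp/bdevperf.sock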
00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.436 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:35.436 "subsystems": [ 00:19:35.436 { 00:19:35.436 "subsystem": "keyring", 00:19:35.436 "config": [ 00:19:35.436 { 00:19:35.436 "method": "keyring_file_add_key", 00:19:35.436 "params": { 00:19:35.436 "name": "key0", 00:19:35.436 "path": "/tmp/tmp.9YvOSgRkpn" 00:19:35.436 } 00:19:35.436 } 00:19:35.436 ] 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "subsystem": "iobuf", 00:19:35.436 "config": [ 00:19:35.436 { 00:19:35.436 "method": "iobuf_set_options", 00:19:35.436 "params": { 00:19:35.436 "small_pool_count": 8192, 00:19:35.436 "large_pool_count": 1024, 00:19:35.436 "small_bufsize": 8192, 00:19:35.436 "large_bufsize": 135168, 00:19:35.436 "enable_numa": false 00:19:35.436 } 00:19:35.436 } 00:19:35.436 ] 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "subsystem": "sock", 00:19:35.436 "config": [ 00:19:35.436 { 00:19:35.436 "method": "sock_set_default_impl", 00:19:35.436 "params": { 00:19:35.436 "impl_name": "posix" 00:19:35.436 } 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "method": "sock_impl_set_options", 00:19:35.436 "params": { 00:19:35.436 "impl_name": "ssl", 00:19:35.436 "recv_buf_size": 4096, 00:19:35.436 "send_buf_size": 4096, 00:19:35.436 "enable_recv_pipe": true, 00:19:35.436 "enable_quickack": false, 00:19:35.436 "enable_placement_id": 0, 00:19:35.436 "enable_zerocopy_send_server": true, 00:19:35.436 "enable_zerocopy_send_client": false, 00:19:35.436 "zerocopy_threshold": 0, 00:19:35.436 "tls_version": 0, 00:19:35.436 "enable_ktls": false 00:19:35.436 } 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "method": "sock_impl_set_options", 00:19:35.436 "params": { 00:19:35.436 "impl_name": "posix", 00:19:35.436 "recv_buf_size": 2097152, 00:19:35.436 "send_buf_size": 2097152, 00:19:35.436 "enable_recv_pipe": true, 00:19:35.436 "enable_quickack": false, 00:19:35.436 "enable_placement_id": 0, 00:19:35.436 "enable_zerocopy_send_server": true, 00:19:35.436 "enable_zerocopy_send_client": false, 00:19:35.436 "zerocopy_threshold": 0, 00:19:35.436 "tls_version": 0, 00:19:35.436 "enable_ktls": false 00:19:35.436 } 00:19:35.436 } 00:19:35.436 ] 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "subsystem": "vmd", 00:19:35.436 "config": [] 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "subsystem": "accel", 00:19:35.436 "config": [ 00:19:35.436 { 00:19:35.436 "method": "accel_set_options", 00:19:35.436 "params": { 00:19:35.436 "small_cache_size": 128, 00:19:35.436 "large_cache_size": 16, 00:19:35.436 "task_count": 2048, 00:19:35.436 "sequence_count": 2048, 00:19:35.436 "buf_count": 2048 00:19:35.436 } 00:19:35.436 } 00:19:35.436 ] 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "subsystem": "bdev", 00:19:35.436 "config": [ 00:19:35.436 { 00:19:35.436 "method": "bdev_set_options", 00:19:35.436 "params": { 00:19:35.436 "bdev_io_pool_size": 65535, 00:19:35.436 "bdev_io_cache_size": 256, 00:19:35.436 "bdev_auto_examine": true, 00:19:35.436 "iobuf_small_cache_size": 128, 00:19:35.436 "iobuf_large_cache_size": 16 00:19:35.436 } 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "method": "bdev_raid_set_options", 00:19:35.436 "params": { 00:19:35.436 "process_window_size_kb": 1024, 00:19:35.436 "process_max_bandwidth_mb_sec": 0 00:19:35.436 } 00:19:35.436 }, 00:19:35.436 { 00:19:35.436 "method": "bdev_iscsi_set_options", 00:19:35.436 "params": { 00:19:35.436 "timeout_sec": 30 00:19:35.436 } 00:19:35.436 }, 00:19:35.436 { 
00:19:35.436 "method": "bdev_nvme_set_options", 00:19:35.436 "params": { 00:19:35.436 "action_on_timeout": "none", 00:19:35.436 "timeout_us": 0, 00:19:35.436 "timeout_admin_us": 0, 00:19:35.436 "keep_alive_timeout_ms": 10000, 00:19:35.436 "arbitration_burst": 0, 00:19:35.436 "low_priority_weight": 0, 00:19:35.436 "medium_priority_weight": 0, 00:19:35.436 "high_priority_weight": 0, 00:19:35.436 "nvme_adminq_poll_period_us": 10000, 00:19:35.436 "nvme_ioq_poll_period_us": 0, 00:19:35.436 "io_queue_requests": 512, 00:19:35.436 "delay_cmd_submit": true, 00:19:35.436 "transport_retry_count": 4, 00:19:35.436 "bdev_retry_count": 3, 00:19:35.436 "transport_ack_timeout": 0, 00:19:35.436 "ctrlr_loss_timeout_sec": 0, 00:19:35.437 "reconnect_delay_sec": 0, 00:19:35.437 "fast_io_fail_timeout_sec": 0, 00:19:35.437 "disable_auto_failback": false, 00:19:35.437 "generate_uuids": false, 00:19:35.437 "transport_tos": 0, 00:19:35.437 "nvme_error_stat": false, 00:19:35.437 "rdma_srq_size": 0, 00:19:35.437 "io_path_stat": false, 00:19:35.437 "allow_accel_sequence": false, 00:19:35.437 "rdma_max_cq_size": 0, 00:19:35.437 "rdma_cm_event_timeout_ms": 0, 00:19:35.437 "dhchap_digests": [ 00:19:35.437 "sha256", 00:19:35.437 "sha384", 00:19:35.437 "sha512" 00:19:35.437 ], 00:19:35.437 "dhchap_dhgroups": [ 00:19:35.437 "null", 00:19:35.437 "ffdhe2048", 00:19:35.437 "ffdhe3072", 00:19:35.437 "ffdhe4096", 00:19:35.437 "ffdhe6144", 00:19:35.437 "ffdhe8192" 00:19:35.437 ] 00:19:35.437 } 00:19:35.437 }, 00:19:35.437 { 00:19:35.437 "method": "bdev_nvme_attach_controller", 00:19:35.437 "params": { 00:19:35.437 "name": "nvme0", 00:19:35.437 "trtype": "TCP", 00:19:35.437 "adrfam": "IPv4", 00:19:35.437 "traddr": "10.0.0.2", 00:19:35.437 "trsvcid": "4420", 00:19:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.437 "prchk_reftag": false, 00:19:35.437 "prchk_guard": false, 00:19:35.437 "ctrlr_loss_timeout_sec": 0, 00:19:35.437 "reconnect_delay_sec": 0, 00:19:35.437 "fast_io_fail_timeout_sec": 0, 00:19:35.437 "psk": "key0", 00:19:35.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.437 "hdgst": false, 00:19:35.437 "ddgst": false, 00:19:35.437 "multipath": "multipath" 00:19:35.437 } 00:19:35.437 }, 00:19:35.437 { 00:19:35.437 "method": "bdev_nvme_set_hotplug", 00:19:35.437 "params": { 00:19:35.437 "period_us": 100000, 00:19:35.437 "enable": false 00:19:35.437 } 00:19:35.437 }, 00:19:35.437 { 00:19:35.437 "method": "bdev_enable_histogram", 00:19:35.437 "params": { 00:19:35.437 "name": "nvme0n1", 00:19:35.437 "enable": true 00:19:35.437 } 00:19:35.437 }, 00:19:35.437 { 00:19:35.437 "method": "bdev_wait_for_examine" 00:19:35.437 } 00:19:35.437 ] 00:19:35.437 }, 00:19:35.437 { 00:19:35.437 "subsystem": "nbd", 00:19:35.437 "config": [] 00:19:35.437 } 00:19:35.437 ] 00:19:35.437 }' 00:19:35.437 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.437 [2024-10-30 12:31:08.036654] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:19:35.437 [2024-10-30 12:31:08.036726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637974 ]
00:19:35.437 [2024-10-30 12:31:08.100629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.695 [2024-10-30 12:31:08.158026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:35.695 [2024-10-30 12:31:08.329333] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:35.953 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:19:35.953 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0
00:19:35.953 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:35.953 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:19:36.211 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.211 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:36.211 Running I/O for 1 seconds...
00:19:37.583 3350.00 IOPS, 13.09 MiB/s
00:19:37.583
00:19:37.583 Latency(us)
00:19:37.583 [2024-10-30T11:31:10.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:37.583 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:37.583 Verification LBA range: start 0x0 length 0x2000
00:19:37.583 nvme0n1 : 1.02 3405.89 13.30 0.00 0.00 37220.15 7670.14 33010.73
00:19:37.583 [2024-10-30T11:31:10.264Z] ===================================================================================================================
00:19:37.583 [2024-10-30T11:31:10.264Z] Total : 3405.89 13.30 0.00 0.00 37220.15 7670.14 33010.73
00:19:37.583 {
00:19:37.583   "results": [
00:19:37.583     {
00:19:37.583       "job": "nvme0n1",
00:19:37.583       "core_mask": "0x2",
00:19:37.583       "workload": "verify",
00:19:37.583       "status": "finished",
00:19:37.583       "verify_range": {
00:19:37.583         "start": 0,
00:19:37.583         "length": 8192
00:19:37.583       },
00:19:37.583       "queue_depth": 128,
00:19:37.583       "io_size": 4096,
00:19:37.583       "runtime": 1.021171,
00:19:37.583       "iops": 3405.8938218966264,
00:19:37.583       "mibps": 13.304272741783697,
00:19:37.583       "io_failed": 0,
00:19:37.583       "io_timeout": 0,
00:19:37.583       "avg_latency_us": 37220.15039720572,
00:19:37.583       "min_latency_us": 7670.139259259259,
00:19:37.583       "max_latency_us": 33010.72592592592
00:19:37.583     }
00:19:37.583   ],
00:19:37.583   "core_count": 1
00:19:37.583 }
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id =
--pid ']'
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:37.583 nvmf_trace.0
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 637974
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 637974 ']'
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 637974
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 637974
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 637974'
killing process with pid 637974
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 637974
00:19:37.583 Received shutdown signal, test time was about 1.000000 seconds
00:19:37.583
00:19:37.583 Latency(us)
00:19:37.583 [2024-10-30T11:31:10.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:37.583 [2024-10-30T11:31:10.264Z] ===================================================================================================================
00:19:37.583 [2024-10-30T11:31:10.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:37.583 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 637974
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:37.583 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:37.583 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:37.840 12:31:10
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 637821 ']' 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 637821 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 637821 ']' 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 637821 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 637821 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 637821' 00:19:37.840 killing process with pid 637821 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 637821 00:19:37.840 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 637821 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.099 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.007 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZobvmFTjVc /tmp/tmp.2lE7fT9E4u /tmp/tmp.9YvOSgRkpn 00:19:40.008 00:19:40.008 real 1m22.907s 00:19:40.008 user 2m16.858s 00:19:40.008 sys 0m25.680s 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.008 ************************************ 00:19:40.008 END TEST nvmf_tls 00:19:40.008 
************************************ 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:40.008 ************************************ 00:19:40.008 START TEST nvmf_fips 00:19:40.008 ************************************ 00:19:40.008 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:40.267 * Looking for test storage... 00:19:40.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:40.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.267 --rc genhtml_branch_coverage=1 00:19:40.267 --rc genhtml_function_coverage=1 00:19:40.267 --rc genhtml_legend=1 00:19:40.267 --rc geninfo_all_blocks=1 00:19:40.267 --rc geninfo_unexecuted_blocks=1 00:19:40.267 00:19:40.267 ' 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:40.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.267 --rc genhtml_branch_coverage=1 00:19:40.267 --rc genhtml_function_coverage=1 00:19:40.267 --rc genhtml_legend=1 00:19:40.267 --rc geninfo_all_blocks=1 00:19:40.267 --rc geninfo_unexecuted_blocks=1 00:19:40.267 00:19:40.267 ' 00:19:40.267 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:40.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.267 --rc genhtml_branch_coverage=1 00:19:40.267 --rc genhtml_function_coverage=1 00:19:40.267 --rc genhtml_legend=1 00:19:40.267 --rc geninfo_all_blocks=1 00:19:40.268 --rc geninfo_unexecuted_blocks=1 00:19:40.268 00:19:40.268 ' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:40.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.268 --rc genhtml_branch_coverage=1 00:19:40.268 --rc genhtml_function_coverage=1 00:19:40.268 --rc genhtml_legend=1 00:19:40.268 --rc geninfo_all_blocks=1 00:19:40.268 --rc geninfo_unexecuted_blocks=1 00:19:40.268 00:19:40.268 ' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:40.268 12:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:40.268 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:40.269 Error setting digest 00:19:40.269 407218082A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:40.269 407218082A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:40.269 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.527 
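The "Error setting digest" output a few lines up is the expected result, not a failure: with the Red Hat FIPS provider active, OpenSSL must refuse a non-approved digest such as MD5, and the test's NOT wrapper turns that refusal into a pass (es=1). A condensed sketch of the same sanity check:

# If MD5 succeeds, FIPS restrictions are not being enforced.
if echo test | openssl md5 > /dev/null 2>&1; then
    echo "MD5 succeeded - FIPS mode is NOT enforced" >&2
    exit 1
fi
echo "MD5 rejected as expected - FIPS provider is active"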
12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.527 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.431 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:42.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:42.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.431 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:42.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:42.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.431 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.431 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.432 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.432 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:19:42.432 00:19:42.432 --- 10.0.0.2 ping statistics --- 00:19:42.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.432 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:19:42.432 00:19:42.432 --- 10.0.0.1 ping statistics --- 00:19:42.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.432 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.432 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=640212 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 640212 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 640212 ']' 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:42.690 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:42.690 [2024-10-30 12:31:15.200226] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
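The nvmf_tcp_init trace above boils down to a small amount of iproute2 plumbing: the two ports of the same physical NIC (cvl_0_0 and cvl_0_1) are split across network namespaces so that target and initiator traffic actually crosses the wire instead of short-circuiting through the host stack. Below is a minimal sketch using the same names and addresses as the trace; the real helper in nvmf/common.sh does more bookkeeping than shown here.

  # one port moves into a private namespace for the target,
  # the other stays in the default namespace for the initiator
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the comment tag lets cleanup find the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Everything that must run on the target side from here on is simply prefixed with NVMF_TARGET_NS_CMD, i.e. ip netns exec cvl_0_0_ns_spdk, which is why nvmf_tgt below is launched through it.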
00:19:42.690 [2024-10-30 12:31:15.200327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.690 [2024-10-30 12:31:15.273088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.690 [2024-10-30 12:31:15.332426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.690 [2024-10-30 12:31:15.332486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.690 [2024-10-30 12:31:15.332515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.690 [2024-10-30 12:31:15.332526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.690 [2024-10-30 12:31:15.332535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.690 [2024-10-30 12:31:15.333164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Vig 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Vig 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Vig 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Vig 00:19:42.948 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:43.207 [2024-10-30 12:31:15.789409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.207 [2024-10-30 12:31:15.805388] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.207 [2024-10-30 12:31:15.805646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.207 malloc0 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.207 12:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=640361 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 640361 /var/tmp/bdevperf.sock 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 640361 ']' 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:43.207 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:43.491 [2024-10-30 12:31:15.940535] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:43.491 [2024-10-30 12:31:15.940638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640361 ] 00:19:43.491 [2024-10-30 12:31:16.005894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.491 [2024-10-30 12:31:16.063307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.491 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:43.491 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:43.491 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Vig 00:19:44.057 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.057 [2024-10-30 12:31:16.696221] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.315 TLSTESTn1 00:19:44.315 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.315 Running I/O for 10 seconds... 
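The ten-second verify run starting here is driven entirely over two RPC sockets, and the trace above contains every host-side command involved. Condensed into a sketch (the target-side RPCs issued by setup_nvmf_tgt_conf are not expanded in the trace, so they are omitted here as well):

  # the TLS PSK is an NVMe interchange-format key and must sit in a 0600 file
  KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  KEY_PATH=$(mktemp -t spdk-psk.XXX)
  echo -n "$KEY" > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"

  # bdevperf is a second SPDK app with its own RPC socket; -z makes it wait for RPCs
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # register the key file in bdevperf's keyring, then attach the controller over TLS
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # start the queued workload against the resulting TLSTESTn1 bdev
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note that the attach call references the key by its keyring name (key0), not by the file path; the path only matters to keyring_file_add_key.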
00:19:46.618 3340.00 IOPS, 13.05 MiB/s [2024-10-30T11:31:20.238Z] 3451.00 IOPS, 13.48 MiB/s [2024-10-30T11:31:21.172Z] 3504.67 IOPS, 13.69 MiB/s [2024-10-30T11:31:22.105Z] 3518.25 IOPS, 13.74 MiB/s [2024-10-30T11:31:23.053Z] 3534.00 IOPS, 13.80 MiB/s [2024-10-30T11:31:24.070Z] 3541.17 IOPS, 13.83 MiB/s [2024-10-30T11:31:25.002Z] 3536.14 IOPS, 13.81 MiB/s [2024-10-30T11:31:25.936Z] 3543.62 IOPS, 13.84 MiB/s [2024-10-30T11:31:27.306Z] 3544.33 IOPS, 13.85 MiB/s [2024-10-30T11:31:27.306Z] 3543.70 IOPS, 13.84 MiB/s 00:19:54.625 Latency(us) 00:19:54.625 [2024-10-30T11:31:27.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.625 Verification LBA range: start 0x0 length 0x2000 00:19:54.625 TLSTESTn1 : 10.02 3549.32 13.86 0.00 0.00 36002.52 7767.23 41166.32 00:19:54.625 [2024-10-30T11:31:27.306Z] =================================================================================================================== 00:19:54.625 [2024-10-30T11:31:27.306Z] Total : 3549.32 13.86 0.00 0.00 36002.52 7767.23 41166.32 00:19:54.625 { 00:19:54.625 "results": [ 00:19:54.625 { 00:19:54.625 "job": "TLSTESTn1", 00:19:54.625 "core_mask": "0x4", 00:19:54.625 "workload": "verify", 00:19:54.625 "status": "finished", 00:19:54.625 "verify_range": { 00:19:54.625 "start": 0, 00:19:54.625 "length": 8192 00:19:54.625 }, 00:19:54.625 "queue_depth": 128, 00:19:54.625 "io_size": 4096, 00:19:54.625 "runtime": 10.019956, 00:19:54.625 "iops": 3549.316983028668, 00:19:54.625 "mibps": 13.864519464955734, 00:19:54.625 "io_failed": 0, 00:19:54.625 "io_timeout": 0, 00:19:54.625 "avg_latency_us": 36002.524984316224, 00:19:54.625 "min_latency_us": 7767.22962962963, 00:19:54.625 "max_latency_us": 41166.317037037035 00:19:54.625 } 00:19:54.625 ], 00:19:54.625 "core_count": 1 00:19:54.625 } 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:54.625 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:54.625 nvmf_trace.0 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 640361 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 640361 ']' 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 640361 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 640361 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 640361' 00:19:54.625 killing process with pid 640361 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 640361 00:19:54.625 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.625 00:19:54.625 Latency(us) 00:19:54.625 [2024-10-30T11:31:27.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.625 [2024-10-30T11:31:27.306Z] =================================================================================================================== 00:19:54.625 [2024-10-30T11:31:27.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 640361 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:54.625 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:54.625 rmmod nvme_tcp 00:19:54.884 rmmod nvme_fabrics 00:19:54.884 rmmod nvme_keyring 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 640212 ']' 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 640212 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 640212 ']' 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 640212 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 640212 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:54.884 12:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 640212' 00:19:54.884 killing process with pid 640212 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 640212 00:19:54.884 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 640212 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.142 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.048 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:57.048 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Vig 00:19:57.048 00:19:57.048 real 0m17.027s 00:19:57.049 user 0m22.777s 00:19:57.049 sys 0m5.277s 00:19:57.049 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.049 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.049 ************************************ 00:19:57.049 END TEST nvmf_fips 00:19:57.049 ************************************ 00:19:57.049 12:31:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:57.049 12:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:57.049 12:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:57.049 12:31:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.306 ************************************ 00:19:57.306 START TEST nvmf_control_msg_list 00:19:57.306 ************************************ 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:57.306 * Looking for test storage... 
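Between the FIPS test ending and control_msg_list starting, nvmftestfini tears everything back down. The pattern worth noting in the trace is how firewall state is made disposable: every rule that ipts inserted carries an SPDK_NVMF comment, so iptr can drop all of them in one pass without tracking rule numbers. In sketch form:

  # insertion (ipts): the rule is tagged with a fixed comment string
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # removal (iptr): rewrite the whole ruleset minus every tagged line
  iptables-save | grep -v SPDK_NVMF | iptables-restore

The rest of the teardown is what the rmmod lines above show: nvme_tcp, nvme_fabrics and nvme_keyring are unloaded (modprobe -v -r nvme-tcp, retried in a {1..20} loop), the test namespace is removed, and the PSK file is deleted with rm -f /tmp/spdk-psk.Vig. killprocess also inspects ps --no-headers -o comm= for the pid first, comparing the name against sudo, so a wrapper process is never signalled as if it were the SPDK reactor.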
00:19:57.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:57.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.306 --rc genhtml_branch_coverage=1 00:19:57.306 --rc genhtml_function_coverage=1 00:19:57.306 --rc genhtml_legend=1 00:19:57.306 --rc geninfo_all_blocks=1 00:19:57.306 --rc geninfo_unexecuted_blocks=1 00:19:57.306 00:19:57.306 ' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:57.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.306 --rc genhtml_branch_coverage=1 00:19:57.306 --rc genhtml_function_coverage=1 00:19:57.306 --rc genhtml_legend=1 00:19:57.306 --rc geninfo_all_blocks=1 00:19:57.306 --rc geninfo_unexecuted_blocks=1 00:19:57.306 00:19:57.306 ' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:57.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.306 --rc genhtml_branch_coverage=1 00:19:57.306 --rc genhtml_function_coverage=1 00:19:57.306 --rc genhtml_legend=1 00:19:57.306 --rc geninfo_all_blocks=1 00:19:57.306 --rc geninfo_unexecuted_blocks=1 00:19:57.306 00:19:57.306 ' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:57.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.306 --rc genhtml_branch_coverage=1 00:19:57.306 --rc genhtml_function_coverage=1 00:19:57.306 --rc genhtml_legend=1 00:19:57.306 --rc geninfo_all_blocks=1 00:19:57.306 --rc geninfo_unexecuted_blocks=1 00:19:57.306 00:19:57.306 ' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.306 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:57.307 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.832 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.832 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.832 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.832 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.832 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:59.832 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.832 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.832 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.832 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.832 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.833 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:19:59.833 00:19:59.833 --- 10.0.0.2 ping statistics --- 00:19:59.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.833 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:19:59.833 00:19:59.833 --- 10.0.0.1 ping statistics --- 00:19:59.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.833 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=643631 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 643631 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 643631 ']' 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 [2024-10-30 12:31:32.220338] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:19:59.833 [2024-10-30 12:31:32.220426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.833 [2024-10-30 12:31:32.290768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.833 [2024-10-30 12:31:32.342888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.833 [2024-10-30 12:31:32.342950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.833 [2024-10-30 12:31:32.342971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.833 [2024-10-30 12:31:32.342989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.833 [2024-10-30 12:31:32.343003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
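nvmfappstart above is the generic bring-up: launch nvmf_tgt inside the namespace, then block in waitforlisten until the RPC socket answers. The loop below is only a stand-in for that helper under the parameters visible in the trace (pid liveness check, /var/tmp/spdk.sock, max_retries=100); the rpc_get_methods probe is an assumption about how readiness is detected, not something lifted from the log.

  # start the target in the namespace; -e 0xFFFF enables all tracepoint groups
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!

  # wait until the app is both alive and answering RPCs
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died"; exit 1; }
      # hypothetical readiness probe; rpc_get_methods is a standard SPDK RPC
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done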
00:19:59.833 [2024-10-30 12:31:32.343683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 [2024-10-30 12:31:32.473673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 Malloc0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.833 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.833 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:59.833 [2024-10-30 12:31:32.513509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=643656 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=643657 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=643658 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.089 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 643656 00:20:00.089 [2024-10-30 12:31:32.592554] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.089 [2024-10-30 12:31:32.592928] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.089 [2024-10-30 12:31:32.593342] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:01.020 Initializing NVMe Controllers 00:20:01.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:01.020 Initialization complete. Launching workers. 
00:20:01.020 ======================================================== 00:20:01.020 Latency(us) 00:20:01.020 Device Information : IOPS MiB/s Average min max 00:20:01.020 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3595.99 14.05 277.68 152.71 573.83 00:20:01.020 ======================================================== 00:20:01.020 Total : 3595.99 14.05 277.68 152.71 573.83 00:20:01.020 00:20:01.276 Initializing NVMe Controllers 00:20:01.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:01.276 Initialization complete. Launching workers. 00:20:01.276 ======================================================== 00:20:01.276 Latency(us) 00:20:01.276 Device Information : IOPS MiB/s Average min max 00:20:01.276 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3368.00 13.16 296.49 210.05 585.32 00:20:01.276 ======================================================== 00:20:01.276 Total : 3368.00 13.16 296.49 210.05 585.32 00:20:01.276 00:20:01.276 Initializing NVMe Controllers 00:20:01.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:01.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:01.276 Initialization complete. Launching workers. 00:20:01.276 ======================================================== 00:20:01.276 Latency(us) 00:20:01.276 Device Information : IOPS MiB/s Average min max 00:20:01.276 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3198.92 12.50 312.18 182.08 40710.89 00:20:01.276 ======================================================== 00:20:01.276 Total : 3198.92 12.50 312.18 182.08 40710.89 00:20:01.276 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 643657 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 643658 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.276 rmmod nvme_tcp 00:20:01.276 rmmod nvme_fabrics 00:20:01.276 rmmod nvme_keyring 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 643631 ']' 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 643631 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 643631 ']' 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 643631 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 643631 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 643631' 00:20:01.276 killing process with pid 643631 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 643631 00:20:01.276 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 643631 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.535 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:04.076 00:20:04.076 real 0m6.437s 00:20:04.076 user 0m5.660s 00:20:04.076 sys 0m2.771s 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:04.076 ************************************ 00:20:04.076 END TEST nvmf_control_msg_list 00:20:04.076 ************************************ 00:20:04.076 
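The nvmf_wait_for_buf test that starts next exercises iobuf pool exhaustion: before framework initialization it shrinks the small iobuf pool to 154 buffers of 8192 bytes, creates the TCP transport with a deliberately small shared-buffer budget (-n 24 -b 24) and 8 KiB IO units (-u 8192), drives 128 KiB random reads at queue depth 4, and finally checks through iobuf_get_stats that the nvmf_TCP module had to retry small-pool allocations (the trace below records retry_count=422). A sketch of that flow under the same assumptions as the sketch above, noting that the early pool options only work because nvmf_tgt is started with --wait-for-rpc:

    # Pool sizing must land before framework_start_init, hence --wait-for-rpc on nvmf_tgt
    $SPDK/scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    $SPDK/scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    # ...subsystem, namespace and listener created as in the previous test...
    $SPDK/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    retries=$($SPDK/scripts/rpc.py iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retries -eq 0 ]] && echo 'FAIL: expected small-pool retries'   # test passes only if retries > 0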
12:31:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:04.076 ************************************ 00:20:04.076 START TEST nvmf_wait_for_buf 00:20:04.076 ************************************ 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:04.076 * Looking for test storage... 00:20:04.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:04.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.076 --rc genhtml_branch_coverage=1 00:20:04.076 --rc genhtml_function_coverage=1 00:20:04.076 --rc genhtml_legend=1 00:20:04.076 --rc geninfo_all_blocks=1 00:20:04.076 --rc geninfo_unexecuted_blocks=1 00:20:04.076 00:20:04.076 ' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:04.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.076 --rc genhtml_branch_coverage=1 00:20:04.076 --rc genhtml_function_coverage=1 00:20:04.076 --rc genhtml_legend=1 00:20:04.076 --rc geninfo_all_blocks=1 00:20:04.076 --rc geninfo_unexecuted_blocks=1 00:20:04.076 00:20:04.076 ' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:04.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.076 --rc genhtml_branch_coverage=1 00:20:04.076 --rc genhtml_function_coverage=1 00:20:04.076 --rc genhtml_legend=1 00:20:04.076 --rc geninfo_all_blocks=1 00:20:04.076 --rc geninfo_unexecuted_blocks=1 00:20:04.076 00:20:04.076 ' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:04.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.076 --rc genhtml_branch_coverage=1 00:20:04.076 --rc genhtml_function_coverage=1 00:20:04.076 --rc genhtml_legend=1 00:20:04.076 --rc geninfo_all_blocks=1 00:20:04.076 --rc geninfo_unexecuted_blocks=1 00:20:04.076 00:20:04.076 ' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.076 12:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.076 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.077 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.981 
12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.981 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:05.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:05.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:05.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:05.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.982 12:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:05.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:20:05.982 00:20:05.982 --- 10.0.0.2 ping statistics --- 00:20:05.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.982 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:05.982 00:20:05.982 --- 10.0.0.1 ping statistics --- 00:20:05.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.982 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=645852 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 645852 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 645852 ']' 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:05.982 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.240 [2024-10-30 12:31:38.680209] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
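All of these phy-mode tests ride on the network plumbing traced just above: the two E810 ports are split across a network namespace so that the target side (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (cvl_0_1, 10.0.0.1, root namespace) talk over a real link, with both directions verified by ping before the target app starts. Condensed from the trace, again with $SPDK standing in for the workspace path:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

nvmftestfini undoes this at the end of each test: the SPDK_NVMF comment makes the rule easy to filter back out (iptables-save | grep -v SPDK_NVMF | iptables-restore) before the addresses are flushed and the namespace torn down.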
00:20:06.240 [2024-10-30 12:31:38.680317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.240 [2024-10-30 12:31:38.754041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.241 [2024-10-30 12:31:38.814393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.241 [2024-10-30 12:31:38.814455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.241 [2024-10-30 12:31:38.814477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.241 [2024-10-30 12:31:38.814495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.241 [2024-10-30 12:31:38.814511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.241 [2024-10-30 12:31:38.815216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.241 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:06.241 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:20:06.241 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.241 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.241 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:06.499 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 Malloc0 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 [2024-10-30 12:31:39.064718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.499 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:06.500 [2024-10-30 12:31:39.088906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.500 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.500 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.500 [2024-10-30 12:31:39.168372] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:07.873 Initializing NVMe Controllers 00:20:07.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:07.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:07.873 Initialization complete. Launching workers. 00:20:07.873 ======================================================== 00:20:07.873 Latency(us) 00:20:07.873 Device Information : IOPS MiB/s Average min max 00:20:07.873 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 28.00 3.50 146918.30 47869.77 191541.17 00:20:07.873 ======================================================== 00:20:07.873 Total : 28.00 3.50 146918.30 47869.77 191541.17 00:20:07.873 00:20:07.873 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:07.873 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:07.873 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.873 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=422 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 422 -eq 0 ]] 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:08.131 rmmod nvme_tcp 00:20:08.131 rmmod nvme_fabrics 00:20:08.131 rmmod nvme_keyring 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 645852 ']' 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 645852 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 645852 ']' 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 645852 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:20:08.131 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.132 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 645852 00:20:08.132 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:08.132 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:08.132 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 645852' 00:20:08.132 killing process with pid 645852 00:20:08.132 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 645852 00:20:08.132 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 645852 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.390 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:10.296 00:20:10.296 real 0m6.711s 00:20:10.296 user 0m3.147s 00:20:10.296 sys 0m2.019s 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:10.296 ************************************ 00:20:10.296 END TEST nvmf_wait_for_buf 00:20:10.296 ************************************ 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:10.296 12:31:42 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:20:12.827 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.827 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:12.827 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
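The perf_adq test opens with the same NIC discovery as its predecessors: gather_supported_nvmf_pci_devs builds allowlists of NVMe-oF-capable devices by PCI ID (Intel E810 0x1592/0x159b, X722 0x37d2, plus a set of Mellanox IDs), keeps the E810 entries for this e810 run, and resolves each matching PCI function to its kernel net device through sysfs, yielding the 'Found ...' lines that follow. A rough, hypothetical equivalent using lspci instead of the script's internal PCI bus cache (8086:159b is the E810 ID found on this host):

    # Hypothetical stand-in for gather_supported_nvmf_pci_devs
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do   # E810 functions, e.g. 0000:0a:00.0
      for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"           # mirrors the script's ##*/ basename strip
      done
    done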
00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:12.828 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:12.828 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:12.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:12.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.828 ************************************ 00:20:12.828 START TEST nvmf_perf_adq 00:20:12.828 ************************************ 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:12.828 * Looking for test storage... 00:20:12.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:12.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.828 --rc genhtml_branch_coverage=1 00:20:12.828 --rc genhtml_function_coverage=1 00:20:12.828 --rc genhtml_legend=1 00:20:12.828 --rc geninfo_all_blocks=1 00:20:12.828 --rc geninfo_unexecuted_blocks=1 00:20:12.828 00:20:12.828 ' 00:20:12.828 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:12.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.828 --rc genhtml_branch_coverage=1 00:20:12.828 --rc genhtml_function_coverage=1 00:20:12.828 --rc genhtml_legend=1 00:20:12.828 --rc geninfo_all_blocks=1 00:20:12.828 --rc geninfo_unexecuted_blocks=1 00:20:12.828 00:20:12.828 ' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:12.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.829 --rc genhtml_branch_coverage=1 00:20:12.829 --rc genhtml_function_coverage=1 00:20:12.829 --rc genhtml_legend=1 00:20:12.829 --rc geninfo_all_blocks=1 00:20:12.829 --rc geninfo_unexecuted_blocks=1 00:20:12.829 00:20:12.829 ' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:12.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.829 --rc genhtml_branch_coverage=1 00:20:12.829 --rc genhtml_function_coverage=1 00:20:12.829 --rc genhtml_legend=1 00:20:12.829 --rc geninfo_all_blocks=1 00:20:12.829 --rc geninfo_unexecuted_blocks=1 00:20:12.829 00:20:12.829 ' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.829 12:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:12.829 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.829 12:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:14.733 12:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.733 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.733 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.733 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.733 12:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.733 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:14.733 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:15.671 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:18.201 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.475 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:23.476 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:23.476 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:23.476 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:23.476 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:23.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:20:23.476 00:20:23.476 --- 10.0.0.2 ping statistics --- 00:20:23.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.476 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:20:23.476 00:20:23.476 --- 10.0.0.1 ping statistics --- 00:20:23.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.476 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=650578 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 650578 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 650578 ']' 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:23.476 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.476 [2024-10-30 12:31:55.528849] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
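The nvmf_tcp_init sequence traced above (nvmf/common.sh@250 onward) turns the two E810 ports into a self-contained initiator/target pair on one host: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and both directions are ping-verified before the target application starts. Condensed from the trace, with the interface names used in this run:

    # Condensed sketch; cvl_0_0 = target port, cvl_0_1 = initiator port.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
    # NVMF_APP is then prefixed with "ip netns exec cvl_0_0_ns_spdk", so
    # nvmf_tgt (-i 0 -e 0xFFFF -m 0xF --wait-for-rpc) runs on the target side.

Running the target in its own namespace makes a single dual-port machine behave like two hosts on the 10.0.0.0/24 link, which is why the spdk_nvme_perf run that follows can connect to 10.0.0.2:4420 over a real NIC path rather than loopback.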
00:20:23.477 [2024-10-30 12:31:55.528942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.477 [2024-10-30 12:31:55.601885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:23.477 [2024-10-30 12:31:55.660962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.477 [2024-10-30 12:31:55.661010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.477 [2024-10-30 12:31:55.661034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.477 [2024-10-30 12:31:55.661046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.477 [2024-10-30 12:31:55.661056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.477 [2024-10-30 12:31:55.662743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.477 [2024-10-30 12:31:55.662808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.477 [2024-10-30 12:31:55.662830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.477 [2024-10-30 12:31:55.662835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 
12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 [2024-10-30 12:31:55.934960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 Malloc1 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.477 [2024-10-30 12:31:56.006035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=650727 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:23.477 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:25.376 "tick_rate": 2700000000, 00:20:25.376 "poll_groups": [ 00:20:25.376 { 00:20:25.376 "name": "nvmf_tgt_poll_group_000", 00:20:25.376 "admin_qpairs": 1, 00:20:25.376 "io_qpairs": 1, 00:20:25.376 "current_admin_qpairs": 1, 00:20:25.376 "current_io_qpairs": 1, 00:20:25.376 "pending_bdev_io": 0, 00:20:25.376 "completed_nvme_io": 19942, 00:20:25.376 "transports": [ 00:20:25.376 { 00:20:25.376 "trtype": "TCP" 00:20:25.376 } 00:20:25.376 ] 00:20:25.376 }, 00:20:25.376 { 00:20:25.376 "name": "nvmf_tgt_poll_group_001", 00:20:25.376 "admin_qpairs": 0, 00:20:25.376 "io_qpairs": 1, 00:20:25.376 "current_admin_qpairs": 0, 00:20:25.376 "current_io_qpairs": 1, 00:20:25.376 "pending_bdev_io": 0, 00:20:25.376 "completed_nvme_io": 19276, 00:20:25.376 "transports": [ 00:20:25.376 { 00:20:25.376 "trtype": "TCP" 00:20:25.376 } 00:20:25.376 ] 00:20:25.376 }, 00:20:25.376 { 00:20:25.376 "name": "nvmf_tgt_poll_group_002", 00:20:25.376 "admin_qpairs": 0, 00:20:25.376 "io_qpairs": 1, 00:20:25.376 "current_admin_qpairs": 0, 00:20:25.376 "current_io_qpairs": 1, 00:20:25.376 "pending_bdev_io": 0, 00:20:25.376 "completed_nvme_io": 19922, 00:20:25.376 "transports": [ 00:20:25.376 { 00:20:25.376 "trtype": "TCP" 00:20:25.376 } 00:20:25.376 ] 00:20:25.376 }, 00:20:25.376 { 00:20:25.376 "name": "nvmf_tgt_poll_group_003", 00:20:25.376 "admin_qpairs": 0, 00:20:25.376 "io_qpairs": 1, 00:20:25.376 "current_admin_qpairs": 0, 00:20:25.376 "current_io_qpairs": 1, 00:20:25.376 "pending_bdev_io": 0, 00:20:25.376 "completed_nvme_io": 19691, 00:20:25.376 "transports": [ 00:20:25.376 { 00:20:25.376 "trtype": "TCP" 00:20:25.376 } 00:20:25.376 ] 00:20:25.376 } 00:20:25.376 ] 00:20:25.376 }' 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:25.376 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:25.634 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:25.634 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:25.634 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 650727 00:20:33.736 Initializing NVMe Controllers 00:20:33.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:33.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:33.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:33.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:20:33.736 Initialization complete. Launching workers. 00:20:33.736 ======================================================== 00:20:33.736 Latency(us) 00:20:33.736 Device Information : IOPS MiB/s Average min max 00:20:33.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10346.40 40.42 6186.56 2898.24 9721.63 00:20:33.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10186.00 39.79 6283.93 2405.00 11022.43 00:20:33.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10402.80 40.64 6152.67 2437.79 10170.38 00:20:33.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10508.40 41.05 6092.12 2281.99 10188.57 00:20:33.737 ======================================================== 00:20:33.737 Total : 41443.59 161.89 6178.04 2281.99 11022.43 00:20:33.737 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.737 rmmod nvme_tcp 00:20:33.737 rmmod nvme_fabrics 00:20:33.737 rmmod nvme_keyring 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 650578 ']' 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 650578 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 650578 ']' 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 650578 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 650578 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 650578' 00:20:33.737 killing process with pid 650578 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 650578 00:20:33.737 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 650578 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.997 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.996 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.996 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:35.996 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:35.996 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:36.931 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:39.457 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.735 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.736 12:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:44.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.736 12:32:16 
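Device discovery above works in two steps: the helper filters PCI functions by vendor:device allow-lists (0x8086:0x159b is the E810 port found here), then resolves each function to its kernel netdev through sysfs. A minimal standalone sketch of the same lookup:

    # Sketch only; the test builds this from its own pci_bus_cache arrays.
    lspci -Dnn -d 8086:159b        # list E810-class ports with full addresses
    pci=0000:0a:00.0               # first port found in this run
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "net device under $pci: ${dev##*/}"   # prints cvl_0_0 here
    done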
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:20:44.736 00:20:44.736 --- 10.0.0.2 ping statistics --- 00:20:44.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.736 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:20:44.736 00:20:44.736 --- 10.0.0.1 ping statistics --- 00:20:44.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.736 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:44.736 net.core.busy_poll = 1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
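The namespace plumbing completed above gives the target and initiator separate network stacks on the same host, so NVMe/TCP traffic actually crosses the wire between the two E810 ports. The commands, collected from the trace:

    # Target port isolated in its own namespace; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator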
00:20:44.736 net.core.busy_read = 1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.736 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=653351 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 653351 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 653351 ']' 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.737 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.737 [2024-10-30 12:32:17.027570] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:20:44.737 [2024-10-30 12:32:17.027676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.737 [2024-10-30 12:32:17.100422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.737 [2024-10-30 12:32:17.161324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
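adq_configure_driver, traced above, is the heart of the ADQ setup: hardware traffic-class offload plus busy polling on the host, then an mqprio root qdisc that carves the queues into two classes and a hardware-only flower filter that pins NVMe/TCP traffic to the dedicated class. Collected into one sketch (the test runs these inside the target namespace):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1     # let epoll spin on the device queues
    sysctl -w net.core.busy_read=1
    # TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ channel)
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # match NVMe/TCP to 10.0.0.2:4420 in hardware only and steer it to TC1
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1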
00:20:44.737 [2024-10-30 12:32:17.161380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.737 [2024-10-30 12:32:17.161408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.737 [2024-10-30 12:32:17.161419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.737 [2024-10-30 12:32:17.161429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.737 [2024-10-30 12:32:17.162968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.737 [2024-10-30 12:32:17.163032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.737 [2024-10-30 12:32:17.163101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.737 [2024-10-30 12:32:17.163105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.737 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 12:32:17 
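Because nvmf_tgt was started with --wait-for-rpc, the socket layer can still be reconfigured before the framework initializes; the trace shows placement-id and zero-copy send being enabled on the default posix implementation. rpc_cmd in the test is a wrapper around scripts/rpc.py, so a standalone equivalent would look roughly like:

    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" here
    scripts/rpc.py sock_impl_set_options -i "$impl" \
        --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init    # only now does the target finish booting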
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 [2024-10-30 12:32:17.430034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 Malloc1 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 [2024-10-30 12:32:17.498882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=653506 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:44.995 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.894 12:32:19 
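The subsystem build-out above (perf_adq.sh@45-49) and the load-generator launch use parameters worth calling out: --sock-priority 1 is what ties accepted TCP connections to ADQ traffic class 1, and the perf core mask 0xF0 keeps the initiator off the four target cores (0xF). As standalone rpc.py calls, the target side is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420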
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:46.894 "tick_rate": 2700000000, 00:20:46.894 "poll_groups": [ 00:20:46.894 { 00:20:46.894 "name": "nvmf_tgt_poll_group_000", 00:20:46.894 "admin_qpairs": 1, 00:20:46.894 "io_qpairs": 3, 00:20:46.894 "current_admin_qpairs": 1, 00:20:46.894 "current_io_qpairs": 3, 00:20:46.894 "pending_bdev_io": 0, 00:20:46.894 "completed_nvme_io": 26088, 00:20:46.894 "transports": [ 00:20:46.894 { 00:20:46.894 "trtype": "TCP" 00:20:46.894 } 00:20:46.894 ] 00:20:46.894 }, 00:20:46.894 { 00:20:46.894 "name": "nvmf_tgt_poll_group_001", 00:20:46.894 "admin_qpairs": 0, 00:20:46.894 "io_qpairs": 1, 00:20:46.894 "current_admin_qpairs": 0, 00:20:46.894 "current_io_qpairs": 1, 00:20:46.894 "pending_bdev_io": 0, 00:20:46.894 "completed_nvme_io": 25578, 00:20:46.894 "transports": [ 00:20:46.894 { 00:20:46.894 "trtype": "TCP" 00:20:46.894 } 00:20:46.894 ] 00:20:46.894 }, 00:20:46.894 { 00:20:46.894 "name": "nvmf_tgt_poll_group_002", 00:20:46.894 "admin_qpairs": 0, 00:20:46.894 "io_qpairs": 0, 00:20:46.894 "current_admin_qpairs": 0, 00:20:46.894 "current_io_qpairs": 0, 00:20:46.894 "pending_bdev_io": 0, 00:20:46.894 "completed_nvme_io": 0, 00:20:46.894 "transports": [ 00:20:46.894 { 00:20:46.894 "trtype": "TCP" 00:20:46.894 } 00:20:46.894 ] 00:20:46.894 }, 00:20:46.894 { 00:20:46.894 "name": "nvmf_tgt_poll_group_003", 00:20:46.894 "admin_qpairs": 0, 00:20:46.894 "io_qpairs": 0, 00:20:46.894 "current_admin_qpairs": 0, 00:20:46.894 "current_io_qpairs": 0, 00:20:46.894 "pending_bdev_io": 0, 00:20:46.894 "completed_nvme_io": 0, 00:20:46.894 "transports": [ 00:20:46.894 { 00:20:46.894 "trtype": "TCP" 00:20:46.894 } 00:20:46.894 ] 00:20:46.894 } 00:20:46.894 ] 00:20:46.894 }' 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:46.894 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 653506 00:20:55.008 Initializing NVMe Controllers 00:20:55.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:55.008 Initialization complete. Launching workers. 
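The nvmf_get_stats output above is what the ADQ assertion keys on: with placement-id enabled, all four I/O qpairs should land on the poll groups backing the ADQ queues (here 3 on group 000 and 1 on group 001), leaving the others idle. The check, reconstructed from the trace:

    # Count poll groups with no active I/O qpairs; fail if fewer than 2 are idle.
    idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    [[ $idle -lt 2 ]] && echo "ADQ placement check failed" && exit 1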
00:20:55.008 ======================================================== 00:20:55.008 Latency(us) 00:20:55.008 Device Information : IOPS MiB/s Average min max 00:20:55.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4665.80 18.23 13739.02 2097.81 61527.93 00:20:55.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3710.20 14.49 17308.71 2140.53 61935.99 00:20:55.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13440.90 52.50 4761.22 1633.14 7220.52 00:20:55.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5431.00 21.21 11785.20 1778.05 60769.69 00:20:55.008 ======================================================== 00:20:55.008 Total : 27247.89 106.44 9407.06 1633.14 61935.99 00:20:55.008 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.008 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.008 rmmod nvme_tcp 00:20:55.267 rmmod nvme_fabrics 00:20:55.267 rmmod nvme_keyring 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 653351 ']' 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 653351 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 653351 ']' 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 653351 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 653351 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 653351' 00:20:55.267 killing process with pid 653351 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 653351 00:20:55.267 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 653351 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.528 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.528 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:57.432 00:20:57.432 real 0m44.966s 00:20:57.432 user 2m40.207s 00:20:57.432 sys 0m9.283s 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 ************************************ 00:20:57.432 END TEST nvmf_perf_adq 00:20:57.432 ************************************ 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:57.432 12:32:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:57.692 ************************************ 00:20:57.692 START TEST nvmf_shutdown 00:20:57.692 ************************************ 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:57.692 * Looking for test storage... 
00:20:57.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:57.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.692 --rc genhtml_branch_coverage=1 00:20:57.692 --rc genhtml_function_coverage=1 00:20:57.692 --rc genhtml_legend=1 00:20:57.692 --rc geninfo_all_blocks=1 00:20:57.692 --rc geninfo_unexecuted_blocks=1 00:20:57.692 00:20:57.692 ' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:57.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.692 --rc genhtml_branch_coverage=1 00:20:57.692 --rc genhtml_function_coverage=1 00:20:57.692 --rc genhtml_legend=1 00:20:57.692 --rc geninfo_all_blocks=1 00:20:57.692 --rc geninfo_unexecuted_blocks=1 00:20:57.692 00:20:57.692 ' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:57.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.692 --rc genhtml_branch_coverage=1 00:20:57.692 --rc genhtml_function_coverage=1 00:20:57.692 --rc genhtml_legend=1 00:20:57.692 --rc geninfo_all_blocks=1 00:20:57.692 --rc geninfo_unexecuted_blocks=1 00:20:57.692 00:20:57.692 ' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:57.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.692 --rc genhtml_branch_coverage=1 00:20:57.692 --rc genhtml_function_coverage=1 00:20:57.692 --rc genhtml_legend=1 00:20:57.692 --rc geninfo_all_blocks=1 00:20:57.692 --rc geninfo_unexecuted_blocks=1 00:20:57.692 00:20:57.692 ' 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
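The block above is scripts/common.sh comparing the installed lcov version against 2, component by component, to pick coverage options. A simplified sketch of that compare (the real helper also supports >, ==, and friends; this only does "less than"):

    cmp_lt() {
        local IFS=.-: v a b
        read -ra a <<< "$1"; read -ra b <<< "$2"   # split on ".", "-" and ":"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1   # equal
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x"   # true in this run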
00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.692 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:57.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:57.693 12:32:30 
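One genuine wart surfaces in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and the shell complains "integer expression expected" because the variable being tested is empty. The test keeps going, but a defensive form would default the value before the numeric test. A generic sketch (SOME_TEST_FLAG is a placeholder name, not the variable common.sh actually uses):

    flag=${SOME_TEST_FLAG:-0}      # empty/unset collapses to 0
    if [ "$flag" -eq 1 ]; then
        echo "feature enabled"     # placeholder action for illustration
    fi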
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.693 ************************************ 00:20:57.693 START TEST nvmf_shutdown_tc1 00:20:57.693 ************************************ 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:57.693 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.227 12:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.227 12:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:00.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:00.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:00.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:00.227 12:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:00.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.227 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:21:00.228 00:21:00.228 --- 10.0.0.2 ping statistics --- 00:21:00.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.228 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:21:00.228 00:21:00.228 --- 10.0.0.1 ping statistics --- 00:21:00.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.228 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=656677 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 656677 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 656677 ']' 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
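The two pings close out nvmf_tcp_init: the test has just built a two-host topology on a single machine. The first E810 port (cvl_0_0, likely function .0 of the same physical adapter as cvl_0_1) was moved into a fresh network namespace and addressed as the target at 10.0.0.2, the second port stayed in the root namespace as the initiator at 10.0.0.1, an iptables rule tagged SPDK_NVMF opened TCP port 4420, and one ping in each direction proved the path. The same sequence, collapsed out of the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'             # tagged so teardown can strip it
ping -c 1 10.0.0.2                                   # root namespace -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator port

nvmf_tgt was then launched inside that namespace (the ip netns exec ... -m 0x1E line above). The mask decodes as 0x1E = 0b11110, bits 1 through 4, which is why exactly four reactors come up on cores 1, 2, 3 and 4 just below; the out-of-order notices are only the reactors racing to log.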
00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.228 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.228 [2024-10-30 12:32:32.710173] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:00.228 [2024-10-30 12:32:32.710266] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.228 [2024-10-30 12:32:32.783093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.228 [2024-10-30 12:32:32.841074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.228 [2024-10-30 12:32:32.841125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.228 [2024-10-30 12:32:32.841154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.228 [2024-10-30 12:32:32.841167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.228 [2024-10-30 12:32:32.841177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.228 [2024-10-30 12:32:32.842792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.228 [2024-10-30 12:32:32.842855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.228 [2024-10-30 12:32:32.842876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.228 [2024-10-30 12:32:32.842879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.486 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.486 [2024-10-30 12:32:32.996416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.486 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.486 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:00.486 12:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:00.486 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.487 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.487 Malloc1 
00:21:00.487 [2024-10-30 12:32:33.095695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.487 Malloc2 00:21:00.487 Malloc3 00:21:00.744 Malloc4 00:21:00.745 Malloc5 00:21:00.745 Malloc6 00:21:00.745 Malloc7 00:21:00.745 Malloc8 00:21:01.003 Malloc9 00:21:01.003 Malloc10 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=656855 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 656855 /var/tmp/bdevperf.sock 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 656855 ']' 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
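The Malloc1 through Malloc10 lines here are the visible half of the create_subsystems loop: after the TCP transport is created, each iteration appends one block of RPCs to rpcs.txt, and the whole file is replayed through a single rpc_cmd, so only the side effects (the malloc bdevs and the listener on 10.0.0.2:4420) appear in the trace. A plausible reconstruction of what each iteration writes; the real text lives in shutdown.sh's heredoc and the Malloc geometry below is a stand-in:

for i in "${num_subsystems[@]}"; do
    cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"    # one rpc.py session applies all ten subsystems

Batching matters here: feeding the file to one rpc_cmd avoids spawning a separate RPC client for each of the forty-odd calls.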
00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 
"trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.003 "hdgst": ${hdgst:-false}, 00:21:01.003 "ddgst": ${ddgst:-false} 00:21:01.003 }, 00:21:01.003 "method": "bdev_nvme_attach_controller" 00:21:01.003 } 00:21:01.003 EOF 00:21:01.003 )") 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.003 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.003 12:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.003 { 00:21:01.003 "params": { 00:21:01.003 "name": "Nvme$subsystem", 00:21:01.003 "trtype": "$TEST_TRANSPORT", 00:21:01.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.003 "adrfam": "ipv4", 00:21:01.003 "trsvcid": "$NVMF_PORT", 00:21:01.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.004 "hdgst": ${hdgst:-false}, 00:21:01.004 "ddgst": ${ddgst:-false} 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 } 00:21:01.004 EOF 00:21:01.004 )") 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.004 { 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme$subsystem", 00:21:01.004 "trtype": "$TEST_TRANSPORT", 00:21:01.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "$NVMF_PORT", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.004 "hdgst": ${hdgst:-false}, 00:21:01.004 "ddgst": ${ddgst:-false} 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 } 00:21:01.004 EOF 00:21:01.004 )") 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.004 { 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme$subsystem", 00:21:01.004 "trtype": "$TEST_TRANSPORT", 00:21:01.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "$NVMF_PORT", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.004 "hdgst": ${hdgst:-false}, 00:21:01.004 "ddgst": ${ddgst:-false} 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 } 00:21:01.004 EOF 00:21:01.004 )") 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
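gen_nvmf_target_json, traced above, is how the secondary apps get their --json config out of plain shell: one heredoc fragment per requested subsystem is appended to the config array, the array is joined on IFS=',', and the result goes through jq (the joined text is printed just below). A reduced sketch of the same accumulate-and-join pattern; in the real helper the joined list is embedded inside a full bdev-subsystem config rather than the bare array used here:

gen_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .   # ${config[*]} joins elements on the first IFS character
}

gen_json 3 5 prints a validated two-element array; with no arguments, the "${@:-1}" default makes it describe Nvme1 alone.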
00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:01.004 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme1", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme2", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme3", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme4", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme5", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme6", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme7", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme8", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme9", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 },{ 00:21:01.004 "params": { 00:21:01.004 "name": "Nvme10", 00:21:01.004 "trtype": "tcp", 00:21:01.004 "traddr": "10.0.0.2", 00:21:01.004 "adrfam": "ipv4", 00:21:01.004 "trsvcid": "4420", 00:21:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.004 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.004 "hdgst": false, 00:21:01.004 "ddgst": false 00:21:01.004 }, 00:21:01.004 "method": "bdev_nvme_attach_controller" 00:21:01.004 }' 00:21:01.004 [2024-10-30 12:32:33.614865] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:01.004 [2024-10-30 12:32:33.614934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:01.262 [2024-10-30 12:32:33.686429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.262 [2024-10-30 12:32:33.745544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 656855 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:03.159 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:04.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 656855 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 656677 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.092 { 00:21:04.092 "params": { 00:21:04.092 "name": "Nvme$subsystem", 00:21:04.092 "trtype": "$TEST_TRANSPORT", 00:21:04.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.092 "adrfam": "ipv4", 00:21:04.092 "trsvcid": "$NVMF_PORT", 00:21:04.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.092 "hdgst": ${hdgst:-false}, 00:21:04.092 "ddgst": ${ddgst:-false} 00:21:04.092 }, 00:21:04.092 "method": "bdev_nvme_attach_controller" 00:21:04.092 } 00:21:04.092 EOF 00:21:04.092 )") 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.092 { 00:21:04.092 "params": { 00:21:04.092 "name": "Nvme$subsystem", 00:21:04.092 "trtype": "$TEST_TRANSPORT", 00:21:04.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.092 "adrfam": "ipv4", 00:21:04.092 "trsvcid": "$NVMF_PORT", 00:21:04.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.092 "hdgst": ${hdgst:-false}, 00:21:04.092 "ddgst": ${ddgst:-false} 00:21:04.092 }, 00:21:04.092 "method": "bdev_nvme_attach_controller" 00:21:04.092 } 00:21:04.092 EOF 00:21:04.092 )") 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.092 { 00:21:04.092 "params": { 00:21:04.092 "name": "Nvme$subsystem", 00:21:04.092 "trtype": "$TEST_TRANSPORT", 00:21:04.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.092 "adrfam": "ipv4", 00:21:04.092 "trsvcid": "$NVMF_PORT", 00:21:04.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.092 "hdgst": ${hdgst:-false}, 00:21:04.092 "ddgst": ${ddgst:-false} 00:21:04.092 }, 00:21:04.092 "method": "bdev_nvme_attach_controller" 00:21:04.092 } 00:21:04.092 EOF 00:21:04.092 )") 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.092 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.092 { 00:21:04.092 "params": { 00:21:04.092 "name": "Nvme$subsystem", 00:21:04.092 "trtype": "$TEST_TRANSPORT", 00:21:04.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 
"trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.093 { 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme$subsystem", 00:21:04.093 "trtype": "$TEST_TRANSPORT", 00:21:04.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.093 { 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme$subsystem", 00:21:04.093 "trtype": "$TEST_TRANSPORT", 00:21:04.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.093 { 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme$subsystem", 00:21:04.093 "trtype": "$TEST_TRANSPORT", 00:21:04.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.093 { 00:21:04.093 
"params": { 00:21:04.093 "name": "Nvme$subsystem", 00:21:04.093 "trtype": "$TEST_TRANSPORT", 00:21:04.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.093 { 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme$subsystem", 00:21:04.093 "trtype": "$TEST_TRANSPORT", 00:21:04.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:04.093 { 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme$subsystem", 00:21:04.093 "trtype": "$TEST_TRANSPORT", 00:21:04.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "$NVMF_PORT", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.093 "hdgst": ${hdgst:-false}, 00:21:04.093 "ddgst": ${ddgst:-false} 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 } 00:21:04.093 EOF 00:21:04.093 )") 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:04.093 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme1", 00:21:04.093 "trtype": "tcp", 00:21:04.093 "traddr": "10.0.0.2", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "4420", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.093 "hdgst": false, 00:21:04.093 "ddgst": false 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 },{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme2", 00:21:04.093 "trtype": "tcp", 00:21:04.093 "traddr": "10.0.0.2", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "4420", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.093 "hdgst": false, 00:21:04.093 "ddgst": false 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 },{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme3", 00:21:04.093 "trtype": "tcp", 00:21:04.093 "traddr": "10.0.0.2", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "4420", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:04.093 "hdgst": false, 00:21:04.093 "ddgst": false 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 },{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme4", 00:21:04.093 "trtype": "tcp", 00:21:04.093 "traddr": "10.0.0.2", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "4420", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:04.093 "hdgst": false, 00:21:04.093 "ddgst": false 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 },{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme5", 00:21:04.093 "trtype": "tcp", 00:21:04.093 "traddr": "10.0.0.2", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "4420", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:04.093 "hdgst": false, 00:21:04.093 "ddgst": false 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 },{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme6", 00:21:04.093 "trtype": "tcp", 00:21:04.093 "traddr": "10.0.0.2", 00:21:04.093 "adrfam": "ipv4", 00:21:04.093 "trsvcid": "4420", 00:21:04.093 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:04.093 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:04.093 "hdgst": false, 00:21:04.093 "ddgst": false 00:21:04.093 }, 00:21:04.093 "method": "bdev_nvme_attach_controller" 00:21:04.093 },{ 00:21:04.093 "params": { 00:21:04.093 "name": "Nvme7", 00:21:04.093 "trtype": "tcp", 00:21:04.094 "traddr": "10.0.0.2", 00:21:04.094 "adrfam": "ipv4", 00:21:04.094 "trsvcid": "4420", 00:21:04.094 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:04.094 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:04.094 "hdgst": false, 00:21:04.094 "ddgst": false 00:21:04.094 }, 00:21:04.094 "method": "bdev_nvme_attach_controller" 00:21:04.094 },{ 00:21:04.094 "params": { 00:21:04.094 "name": "Nvme8", 00:21:04.094 "trtype": "tcp", 00:21:04.094 "traddr": "10.0.0.2", 00:21:04.094 "adrfam": "ipv4", 00:21:04.094 "trsvcid": "4420", 00:21:04.094 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:04.094 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:04.094 "hdgst": false,
00:21:04.094 "ddgst": false
00:21:04.094 },
00:21:04.094 "method": "bdev_nvme_attach_controller"
00:21:04.094 },{
00:21:04.094 "params": {
00:21:04.094 "name": "Nvme9",
00:21:04.094 "trtype": "tcp",
00:21:04.094 "traddr": "10.0.0.2",
00:21:04.094 "adrfam": "ipv4",
00:21:04.094 "trsvcid": "4420",
00:21:04.094 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:21:04.094 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:21:04.094 "hdgst": false,
00:21:04.094 "ddgst": false
00:21:04.094 },
00:21:04.094 "method": "bdev_nvme_attach_controller"
00:21:04.094 },{
00:21:04.094 "params": {
00:21:04.094 "name": "Nvme10",
00:21:04.094 "trtype": "tcp",
00:21:04.094 "traddr": "10.0.0.2",
00:21:04.094 "adrfam": "ipv4",
00:21:04.094 "trsvcid": "4420",
00:21:04.094 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:21:04.094 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:21:04.094 "hdgst": false,
00:21:04.094 "ddgst": false
00:21:04.094 },
00:21:04.094 "method": "bdev_nvme_attach_controller"
00:21:04.094 }'
00:21:04.094 [2024-10-30 12:32:36.682299] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:21:04.094 [2024-10-30 12:32:36.682388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657272 ]
00:21:04.094 [2024-10-30 12:32:36.755731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:04.352 [2024-10-30 12:32:36.817886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:05.725 Running I/O for 1 seconds...
00:21:06.916 1737.00 IOPS, 108.56 MiB/s
00:21:06.916 Latency(us)
00:21:06.916 [2024-10-30T11:32:39.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:06.916 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme1n1 : 1.15 223.44 13.97 0.00 0.00 283392.76 19903.53 256318.58
00:21:06.916 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme2n1 : 1.04 184.75 11.55 0.00 0.00 336688.42 22233.69 279620.27
00:21:06.916 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme3n1 : 1.10 233.17 14.57 0.00 0.00 261772.33 18641.35 254765.13
00:21:06.916 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme4n1 : 1.15 222.57 13.91 0.00 0.00 270793.96 18932.62 248551.35
00:21:06.916 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme5n1 : 1.13 236.39 14.77 0.00 0.00 243038.93 13204.29 243891.01
00:21:06.916 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme6n1 : 1.14 223.95 14.00 0.00 0.00 260234.62 19806.44 273406.48
00:21:06.916 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme7n1 : 1.16 221.19 13.82 0.00 0.00 259338.62 17282.09 270299.59
00:21:06.916 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme8n1 : 1.17 277.85 17.37 0.00 0.00 202655.42 2208.81 262532.36
00:21:06.916 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme9n1 : 1.16 220.22 13.76 0.00 0.00 251692.94 21651.15 267192.70
00:21:06.916 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:06.916 Verification LBA range: start 0x0 length 0x400
00:21:06.916 Nvme10n1 : 1.17 222.98 13.94 0.00 0.00 244351.33 2451.53 288940.94
00:21:06.916 [2024-10-30T11:32:39.597Z] ===================================================================================================================
00:21:06.916 [2024-10-30T11:32:39.597Z] Total : 2266.50 141.66 0.00 0.00 257846.81 2208.81 288940.94
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:07.179 rmmod nvme_tcp
00:21:07.179 rmmod nvme_fabrics
00:21:07.179 rmmod nvme_keyring
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 656677 ']'
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 656677
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 656677 ']'
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 656677
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname
00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 --
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 656677 00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 656677' 00:21:07.179 killing process with pid 656677 00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 656677 00:21:07.179 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 656677 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.746 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.652 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:09.652 00:21:09.652 real 0m12.017s 00:21:09.652 user 0m34.481s 00:21:09.652 sys 0m3.396s 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.911 ************************************ 00:21:09.911 END TEST nvmf_shutdown_tc1 00:21:09.911 ************************************ 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
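Stepping back, tc1's pass/fail logic is in the pids above: shutdown.sh SIGKILLed the bdev_svc helper that had attached all ten controllers (kill -9 656855, the shell's "Killed" message), used kill -0 656677 to confirm the nvmf target survived that abrupt client death (signal 0 delivers nothing and only reports whether the pid is still signalable), and then drove bdevperf against the same ten subsystems with -q 64 -o 65536 -w verify -t 1: queue depth 64 per bdev, 64 KiB I/Os, a write-then-read-back verify workload, for one second. The results table is self-consistent with that I/O size, since 64 KiB is 1/16 MiB, so MiB/s must equal IOPS divided by 16:

awk 'BEGIN { printf "%.2f %.2f %.3f\n", 1737.00/16, 2266.50/16, 223.44/16 }'
# prints: 108.56 141.66 13.965
# matching the sampled 108.56 MiB/s, the Total row's 141.66, and Nvme1n1's 13.97 (rounded)

The timing summary (real 0m12.017s) covers all of nvmf_shutdown_tc1: NIC discovery, namespace setup, target start, subsystem creation, the kill-and-survive check, the one-second bdevperf run, and the teardown above, down to stripping the SPDK_NVMF-tagged firewall rules by filtering them out of the iptables-save output before iptables-restore.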
00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:09.911 ************************************ 00:21:09.911 START TEST nvmf_shutdown_tc2 00:21:09.911 ************************************ 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:09.911 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:09.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:09.911 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.911 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:09.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:09.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:09.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.912 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:09.912 00:21:09.912 --- 10.0.0.2 ping statistics --- 00:21:09.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.912 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:21:09.912 00:21:09.912 --- 10.0.0.1 ping statistics --- 00:21:09.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.912 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.912 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=658038 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 658038 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 658038 ']' 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
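The ping exchanges above close out nvmf_tcp_init. Because this is a phy setup (the [[ phy != virt ]] check at nvmf/common.sh@442), the two E810 ports found earlier are physical NICs cabled back-to-back, and the harness isolates them with a network namespace instead of veth pairs: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule opens TCP port 4420 for the NVMe/TCP listener. A condensed sketch of the setup just traced:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With both directions answering in well under a millisecond, nvmf_tgt is launched inside the namespace with core mask 0x1E, which is why the trace that follows shows four reactors starting on cores 1 through 4.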
00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:09.913 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.171 [2024-10-30 12:32:42.626269] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:10.171 [2024-10-30 12:32:42.626356] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.171 [2024-10-30 12:32:42.704776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.171 [2024-10-30 12:32:42.762570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.171 [2024-10-30 12:32:42.762630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.171 [2024-10-30 12:32:42.762643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.171 [2024-10-30 12:32:42.762655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.171 [2024-10-30 12:32:42.762665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.171 [2024-10-30 12:32:42.764157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.171 [2024-10-30 12:32:42.764223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.171 [2024-10-30 12:32:42.764292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.171 [2024-10-30 12:32:42.764297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.428 [2024-10-30 12:32:42.904111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:10.428 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.428 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.429 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.429 Malloc1 
00:21:10.429 [2024-10-30 12:32:42.996858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.429 Malloc2 00:21:10.429 Malloc3 00:21:10.686 Malloc4 00:21:10.686 Malloc5 00:21:10.686 Malloc6 00:21:10.686 Malloc7 00:21:10.686 Malloc8 00:21:10.686 Malloc9 00:21:10.944 Malloc10 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=658214 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 658214 /var/tmp/bdevperf.sock 00:21:10.944 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 658214 ']' 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
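With the listener up and ten Malloc bdevs exported, bdevperf is started against it, receiving its whole configuration through --json /dev/fd/63, a process-substitution fd fed by gen_nvmf_target_json 1 2 ... 10 whose expansion is traced next: for each subsystem index the helper instantiates one heredoc template into a bdev_nvme_attach_controller stanza (NvmeN attached to nqn.2016-06.io.spdk:cnodeN at $NVMF_FIRST_TARGET_IP:$NVMF_PORT), accumulates the stanzas in an array, then joins and pretty-prints them with jq. A simplified sketch of that generator, assuming the environment the harness exports (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420); it uses printf where the real helper uses a heredoc, and emits a bare JSON array where the real helper produces the full SPDK config envelope:

  gen_config() {
      local subsystem config=()
      for subsystem in "$@"; do
          # one attach-controller stanza per subsystem; header/data digests stay
          # off unless hdgst/ddgst are exported (mirrors the ${hdgst:-false} defaults)
          config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
              "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
              "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
      done
      local IFS=,
      printf '[%s]\n' "${config[*]}" | jq .   # join on commas, validate, pretty-print
  }

Running gen_config 1 2 3 4 5 6 7 8 9 10 reproduces, modulo that envelope, the ten-controller document printed further below.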
00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 
"trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.945 { 00:21:10.945 "params": { 00:21:10.945 "name": "Nvme$subsystem", 00:21:10.945 "trtype": "$TEST_TRANSPORT", 00:21:10.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.945 "adrfam": "ipv4", 00:21:10.945 "trsvcid": "$NVMF_PORT", 00:21:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.945 "hdgst": ${hdgst:-false}, 00:21:10.945 "ddgst": ${ddgst:-false} 00:21:10.945 }, 00:21:10.945 "method": "bdev_nvme_attach_controller" 00:21:10.945 } 00:21:10.945 EOF 00:21:10.945 )") 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.945 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:10.946 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:10.946 { 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme$subsystem", 00:21:10.946 "trtype": "$TEST_TRANSPORT", 00:21:10.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "$NVMF_PORT", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.946 "hdgst": ${hdgst:-false}, 00:21:10.946 "ddgst": ${ddgst:-false} 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 } 00:21:10.946 EOF 00:21:10.946 )") 00:21:10.946 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:10.946 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
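All ten stanzas are now queued in config[]; the jq step just traced and the IFS=, / printf pair that follows collapse them into the single pretty-printed argument string echoed next. For bdevperf to accept it via --json, the joined stanzas ultimately sit inside the standard SPDK JSON-config envelope (a top-level "subsystems" list whose "bdev" entry carries a "config" array); the trace prints only the joined stanzas, so the envelope in this reconstruction of the final assembly step is an assumption:

  IFS=,
  # wrap the comma-joined stanzas in the conventional SPDK config envelope
  # (envelope assumed; only the joined stanzas appear verbatim in the trace)
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .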
00:21:10.946 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:10.946 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme1", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme2", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme3", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme4", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme5", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme6", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme7", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme8", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme9", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 },{ 00:21:10.946 "params": { 00:21:10.946 "name": "Nvme10", 00:21:10.946 "trtype": "tcp", 00:21:10.946 "traddr": "10.0.0.2", 00:21:10.946 "adrfam": "ipv4", 00:21:10.946 "trsvcid": "4420", 00:21:10.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:10.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:10.946 "hdgst": false, 00:21:10.946 "ddgst": false 00:21:10.946 }, 00:21:10.946 "method": "bdev_nvme_attach_controller" 00:21:10.946 }' 00:21:10.946 [2024-10-30 12:32:43.509747] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:10.946 [2024-10-30 12:32:43.509834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658214 ] 00:21:10.946 [2024-10-30 12:32:43.581948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.203 [2024-10-30 12:32:43.642908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.573 Running I/O for 10 seconds... 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=23 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 23 -ge 100 ']' 00:21:13.138 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:13.396 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:13.396 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:13.396 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:13.396 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 658214 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 658214 ']' 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 658214 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658214 00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:13.397 12:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658214'
00:21:13.397 killing process with pid 658214
00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 658214
00:21:13.397 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 658214
00:21:13.397 Received shutdown signal, test time was about 0.820324 seconds
00:21:13.397
00:21:13.397 Latency(us)
00:21:13.397 [2024-10-30T11:32:46.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:13.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme1n1 : 0.81 238.03 14.88 0.00 0.00 265239.77 29903.83 240784.12
00:21:13.397 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme2n1 : 0.79 242.19 15.14 0.00 0.00 254315.08 19612.25 253211.69
00:21:13.397 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme3n1 : 0.78 250.27 15.64 0.00 0.00 238688.01 4393.34 248551.35
00:21:13.397 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme4n1 : 0.79 244.32 15.27 0.00 0.00 239268.60 23107.51 236123.78
00:21:13.397 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme5n1 : 0.80 240.49 15.03 0.00 0.00 236560.94 20874.43 253211.69
00:21:13.397 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme6n1 : 0.81 236.63 14.79 0.00 0.00 236361.96 20291.89 256318.58
00:21:13.397 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme7n1 : 0.80 240.97 15.06 0.00 0.00 225442.32 28738.75 229910.00
00:21:13.397 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme8n1 : 0.82 235.28 14.70 0.00 0.00 225751.04 21068.61 267192.70
00:21:13.397 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme9n1 : 0.82 234.29 14.64 0.00 0.00 220898.04 21845.33 264085.81
00:21:13.397 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.397 Verification LBA range: start 0x0 length 0x400
00:21:13.397 Nvme10n1 : 0.77 166.16 10.38 0.00 0.00 298456.56 21554.06 284280.60
00:21:13.397 [2024-10-30T11:32:46.078Z] ===================================================================================================================
00:21:13.397 [2024-10-30T11:32:46.078Z] Total : 2328.62 145.54 0.00 0.00 242216.20 4393.34 284280.60
00:21:13.655 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 658038 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.027 rmmod nvme_tcp 00:21:15.027 rmmod nvme_fabrics 00:21:15.027 rmmod nvme_keyring 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 658038 ']' 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 658038 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 658038 ']' 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 658038 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 658038 00:21:15.027 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:15.028 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:15.028 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 658038' 00:21:15.028 killing process with pid 658038 00:21:15.028 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@971 -- # kill 658038 00:21:15.028 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 658038 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.287 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:17.890 00:21:17.890 real 0m7.555s 00:21:17.890 user 0m22.973s 00:21:17.890 sys 0m1.389s 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.890 ************************************ 00:21:17.890 END TEST nvmf_shutdown_tc2 00:21:17.890 ************************************ 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:17.890 ************************************ 00:21:17.890 START TEST nvmf_shutdown_tc3 00:21:17.890 ************************************ 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.890 12:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.890 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.890 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:17.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.891 12:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:17.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:17.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.891 12:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:17.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:17.891 12:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:17.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:21:17.891 00:21:17.891 --- 10.0.0.2 ping statistics --- 00:21:17.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.891 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:17.891 00:21:17.891 --- 10.0.0.1 ping statistics --- 00:21:17.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.891 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=659130 00:21:17.891 12:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 659130 00:21:17.891 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 659130 ']' 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.892 [2024-10-30 12:32:50.229581] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:17.892 [2024-10-30 12:32:50.229673] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.892 [2024-10-30 12:32:50.304514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.892 [2024-10-30 12:32:50.364466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.892 [2024-10-30 12:32:50.364526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.892 [2024-10-30 12:32:50.364540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.892 [2024-10-30 12:32:50.364550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.892 [2024-10-30 12:32:50.364559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
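[Editor's sketch] The nvmf_tcp_init / nvmfappstart trace above boils down to the sequence below: one e810 port is moved into a private network namespace to act as the target, the other stays in the host namespace as the initiator, and the target binary is then always launched under ip netns exec. This is a condensed, hand-written reconstruction of the commands visible in the trace, not the harness itself; the interface names, addresses, and core mask are the ones from this particular run, and the backgrounding/PID-capture lines at the end are assumed glue rather than quoted script.

# Reconstructed from the xtrace above (assumes the two e810 ports are
# already exposed as cvl_0_0 and cvl_0_1).
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port toward the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# every target invocation is then wrapped in the namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!   # the trace shows this run got pid 659130

The tripled "ip netns exec cvl_0_0_ns_spdk" prefix in the traced command is apparently harmless accumulation: nvmf/common.sh@293 prepends the namespace wrapper onto NVMF_APP each time nvmf_tcp_init runs, and by this third shutdown test case it has run three times.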
00:21:17.892 [2024-10-30 12:32:50.366056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.892 [2024-10-30 12:32:50.366174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.892 [2024-10-30 12:32:50.366292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.892 [2024-10-30 12:32:50.366297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.892 [2024-10-30 12:32:50.516907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.892 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:18.152 Malloc1 00:21:18.152 [2024-10-30 12:32:50.621834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.152 Malloc2 00:21:18.152 Malloc3 00:21:18.152 Malloc4 00:21:18.152 Malloc5 00:21:18.410 Malloc6 00:21:18.410 Malloc7 00:21:18.410 Malloc8 00:21:18.410 Malloc9 00:21:18.410 Malloc10 00:21:18.410 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.410 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:18.410 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.410 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:18.669 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=659195 00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 659195 /var/tmp/bdevperf.sock 00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 659195 ']' 00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.670 12:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:18.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:21:18.670 {
00:21:18.670 "params": {
00:21:18.670 "name": "Nvme$subsystem",
00:21:18.670 "trtype": "$TEST_TRANSPORT",
00:21:18.670 "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:18.670 "adrfam": "ipv4",
00:21:18.670 "trsvcid": "$NVMF_PORT",
00:21:18.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:18.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:18.670 "hdgst": ${hdgst:-false},
00:21:18.670 "ddgst": ${ddgst:-false}
00:21:18.670 },
00:21:18.670 "method": "bdev_nvme_attach_controller"
00:21:18.670 }
00:21:18.670 EOF
00:21:18.670 )")
00:21:18.670 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[The identical for subsystem / config+=(heredoc) / cat trace repeats verbatim for subsystems 2 through 10; the duplicate iterations are collapsed here, leaving only the tail of the final iteration below.]
00:21:18.671 12:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:18.671 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:21:18.671 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:18.671 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme1", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme2", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme3", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme4", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme5", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme6", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme7", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme8", 00:21:18.671 "trtype": "tcp", 
00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme9", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 },{ 00:21:18.671 "params": { 00:21:18.671 "name": "Nvme10", 00:21:18.671 "trtype": "tcp", 00:21:18.671 "traddr": "10.0.0.2", 00:21:18.671 "adrfam": "ipv4", 00:21:18.671 "trsvcid": "4420", 00:21:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:18.671 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:18.671 "hdgst": false, 00:21:18.671 "ddgst": false 00:21:18.671 }, 00:21:18.671 "method": "bdev_nvme_attach_controller" 00:21:18.671 }' 00:21:18.671 [2024-10-30 12:32:51.152805] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:18.671 [2024-10-30 12:32:51.152888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659195 ] 00:21:18.671 [2024-10-30 12:32:51.225376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.671 [2024-10-30 12:32:51.286503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.570 Running I/O for 10 seconds... 
00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:20.829 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=72 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:21:21.087 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.350 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 659130 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 659130 ']' 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 659130 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 659130 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:21.351 12:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 659130'
00:21:21.351 killing process with pid 659130
00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 659130
00:21:21.351 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 659130
00:21:21.351 [2024-10-30 12:32:53.986479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14771b0 is same with the state(6) to be set
[The same tcp.c:1773 nvmf_tcp_qpair_set_recv_state *ERROR* entry repeats dozens of times while the target tears down its qpairs, first for tqpair=0x14771b0 (offsets 12:32:53.986479 through .987382) and then for tqpair=0x144fd90 (from offset 12:32:53.989209 onward); only the timestamps differ, so the duplicate entries are collapsed here.]
00:21:21.352 [2024-10-30 12:32:53.989535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the 
state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.989993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.990005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.990017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.990028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.352 [2024-10-30 12:32:53.990039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.990051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144fd90 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.995923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 
12:32:53.995958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.995973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.995986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.995998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same 
with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996516] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.996663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b50 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the 
state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.353 [2024-10-30 12:32:53.998973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.998984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.998997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 12:32:53.999466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478510 is same with the state(6) to be set 00:21:21.354 [2024-10-30 
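The repeated tcp.c:1773 lines above (and the further bursts below) come from the SPDK TCP transport's receive-state setter: the target process was just killed with connections still open, and while the dying qpairs are polled the transport keeps being asked to move each qpair's PDU receive state to the state it is already in, logging once per attempt. A minimal sketch of that guard pattern follows; every name in it is an assumption made for illustration, and only the message format and the compare-before-set behavior are taken from the log itself (the real code lives in SPDK's lib/nvmf/tcp.c):

    #include <stdio.h>

    /* Illustrative reconstruction of the guard behind the tcp.c:1773 message.
     * Enum and struct names here are assumptions for the sketch. */
    enum pdu_recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        /* ... intermediate receive states ... */
        RECV_STATE_TERMINAL = 6   /* assumed: the state printing as "state(6)" */
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Asked to re-enter the current state: log and return. Repeated
             * polling of a qpair stuck in one state floods the log exactly
             * as seen above. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;   /* per-state bookkeeping would follow */
    }

The message is noisy but harmless in this test: setting an already-set state is treated as a no-op, so the flood indicates repeated polling during teardown rather than data corruption.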
00:21:21.354 [2024-10-30 12:32:54.000903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14789e0 is same with the state(6) to be set
[... identical recv-state message for tqpair=0x14789e0 repeated through 12:32:54.001818 ...]
00:21:21.354 [2024-10-30 12:32:54.000943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.354 [2024-10-30 12:32:54.000988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:62 (lba 24704 through 32512, len:128 each) ...]
00:21:21.357 [2024-10-30 12:32:54.003011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.357 [2024-10-30 12:32:54.003033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.357 [2024-10-30 12:32:54.003077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:21.357 [2024-10-30 12:32:54.003263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
[... identical recv-state message for tqpair=0x1478eb0 repeated through 12:32:54.003325 ...]
[2024-10-30 12:32:54.003337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.357 [2024-10-30 12:32:54.003452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set 00:21:21.358 [2024-10-30 12:32:54.003645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.358 [2024-10-30 12:32:54.003667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbec310 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.003936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.003948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf4270 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.003992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.004016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.004044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.004061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.004078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.004093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.004105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.358 [2024-10-30 12:32:54.004118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.358 [2024-10-30 12:32:54.004130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e110 is same with the state(6) to be set
00:21:21.358 [2024-10-30 12:32:54.004143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478eb0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1052e10 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.004521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1017810 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.004579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbebb30 is same with the state(6) to be set 00:21:21.359 [2024-10-30 12:32:54.004746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf66f0 is same with the state(6) to be set 00:21:21.359 [2024-10-30 12:32:54.004912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.359 [2024-10-30 12:32:54.004968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.359 [2024-10-30 12:32:54.004982] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.004995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:21.359 [2024-10-30 12:32:54.005021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbed290 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.005162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.005325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.005357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.005372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.359 [2024-10-30 12:32:54.005374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.005387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.359 [2024-10-30 12:32:54.005390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.359 [2024-10-30 12:32:54.005400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.360 [2024-10-30 12:32:54.005901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.360 [2024-10-30 12:32:54.005915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.360 [2024-10-30 12:32:54.005916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.005942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.005954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.005966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.005978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.005992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.006002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.006014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.006037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.006049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.006061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.006073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.006090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.006119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.006131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.361 [2024-10-30 12:32:54.006142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207de0 is same with the state(6) to be set
00:21:21.361 [2024-10-30 12:32:54.006151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:21.361 [2024-10-30 12:32:54.006170] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.361 [2024-10-30 12:32:54.006486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.361 [2024-10-30 12:32:54.006500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.362 [2024-10-30 12:32:54.006860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.362 [2024-10-30 12:32:54.006952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.006978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.006992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.362 [2024-10-30 12:32:54.007173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
00:21:21.363 [2024-10-30 12:32:54.007758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the
state(6) to be set 00:21:21.363 [2024-10-30 12:32:54.007771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.363 [2024-10-30 12:32:54.007797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12082b0 is same with the state(6) to be set 00:21:21.363 [2024-10-30 12:32:54.028461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 
[2024-10-30 12:32:54.028835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.028943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.028958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 
12:32:54.029365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.634 [2024-10-30 12:32:54.029522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.634 [2024-10-30 12:32:54.029538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.029978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.029994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.635 [2024-10-30 12:32:54.030798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.635 [2024-10-30 12:32:54.030812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.030827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.030842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.030858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.030872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.030887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.030901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.030921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.030936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.030952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.030966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.030982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.030996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.031012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.031027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.031042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.031057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.031075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.031089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.031105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.031119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.031135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.031149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.032894] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:21.636 [2024-10-30 12:32:54.032943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbec310 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.033009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf4270 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.033049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e110 (9): Bad file 
descriptor 00:21:21.636 [2024-10-30 12:32:54.033102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1017040 is same with the state(6) to be set 00:21:21.636 [2024-10-30 12:32:54.033271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1052e10 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.033324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:21.636 [2024-10-30 12:32:54.033427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.033441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104fd30 is same with the state(6) to be set 00:21:21.636 [2024-10-30 12:32:54.033471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1017810 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.033498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xbebb30 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.033524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf66f0 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.033553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbed290 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.036247] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:21.636 [2024-10-30 12:32:54.036297] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:21.636 [2024-10-30 12:32:54.037223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.636 [2024-10-30 12:32:54.037264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbec310 with addr=10.0.0.2, port=4420 00:21:21.636 [2024-10-30 12:32:54.037284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbec310 is same with the state(6) to be set 00:21:21.636 [2024-10-30 12:32:54.037372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.636 [2024-10-30 12:32:54.037399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbed290 with addr=10.0.0.2, port=4420 00:21:21.636 [2024-10-30 12:32:54.037415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbed290 is same with the state(6) to be set 00:21:21.636 [2024-10-30 12:32:54.037503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.636 [2024-10-30 12:32:54.037527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebb30 with addr=10.0.0.2, port=4420 00:21:21.636 [2024-10-30 12:32:54.037550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbebb30 is same with the state(6) to be set 00:21:21.636 [2024-10-30 12:32:54.038223] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.038312] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.038554] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.038588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbec310 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.038612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbed290 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.038631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbebb30 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.038691] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.038821] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.038891] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.038958] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:21.636 [2024-10-30 12:32:54.039001] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:21.636 [2024-10-30 12:32:54.039021] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:21.636 [2024-10-30 12:32:54.039038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:21.636 [2024-10-30 12:32:54.039063] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:21.636 [2024-10-30 12:32:54.039079] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:21.636 [2024-10-30 12:32:54.039091] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:21.636 [2024-10-30 12:32:54.039111] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:21.636 [2024-10-30 12:32:54.039125] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:21.636 [2024-10-30 12:32:54.039139] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:21.636 [2024-10-30 12:32:54.039241] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:21.636 [2024-10-30 12:32:54.039273] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:21.636 [2024-10-30 12:32:54.039290] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:21.636 [2024-10-30 12:32:54.042913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1017040 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.042970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104fd30 (9): Bad file descriptor 00:21:21.636 [2024-10-30 12:32:54.043139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.043165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.043193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.043209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.636 [2024-10-30 12:32:54.043226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.636 [2024-10-30 12:32:54.043251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 
12:32:54.043323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043626] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.637 [2024-10-30 12:32:54.043916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.637 [2024-10-30 12:32:54.043930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:21.637 [... repeated NOTICE pairs elided: nvme_io_qpair_print_command/spdk_nvme_print_completion report each outstanding READ (sqid:1, cid:0-63, nsid:1, lba 16384-32640, len:128, SGL TRANSPORT DATA BLOCK) as ABORTED - SQ DELETION (00/08) ...]
00:21:21.637 [2024-10-30 12:32:54.045141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109aa90 is same with the state(6) to be set
00:21:21.638 [... identical ABORTED - SQ DELETION NOTICE pairs elided for the next qpair (cid:0-63, lba 16384-24448) ...]
00:21:21.640 [2024-10-30 12:32:54.048425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffdd50 is same with the state(6) to be set
00:21:21.640 [... identical ABORTED - SQ DELETION NOTICE pairs elided for the next qpair (cid:0-63, lba 16384-24448) ...]
00:21:21.641 [2024-10-30 12:32:54.059893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10df720 is same with the state(6) to be set
00:21:21.642 [... further ABORTED - SQ DELETION NOTICE pairs elided ...]
00:21:21.642 [2024-10-30 12:32:54.062579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:21:21.642 [2024-10-30 12:32:54.062595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.642 [2024-10-30 12:32:54.062608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.642 [2024-10-30 12:32:54.062624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:21.643 [2024-10-30 12:32:54.062898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.062972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.062988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 
12:32:54.063201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.063244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.063265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0c00 is same with the state(6) to be set 00:21:21.643 [2024-10-30 12:32:54.064539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.643 [2024-10-30 12:32:54.064958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.643 [2024-10-30 12:32:54.064971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.064987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.065973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.065988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.644 [2024-10-30 12:32:54.066245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.644 [2024-10-30 12:32:54.066266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.645 [2024-10-30 12:32:54.066518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.645 [2024-10-30 12:32:54.066532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6800 is same with the state(6) to be set 00:21:21.645 [2024-10-30 12:32:54.067740] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:21.645 [2024-10-30 12:32:54.067770] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:21.645 [2024-10-30 12:32:54.067789] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:21.645 [2024-10-30 12:32:54.067807] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:21.645 [2024-10-30 12:32:54.067924] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
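The wall of notices above is SPDK draining every queued READ when its submission queues are deleted for the reset: each command is completed with status (00/08), generic status code 0x08, "Command Aborted due to SQ Deletion", rather than with a media error. As a minimal sketch of how an initiator-side completion callback could single that status out, assuming only SPDK's public spdk/nvme.h API (the callback name and its use here are illustrative, not part of this test):

    #include "spdk/nvme.h"

    /* Illustrative I/O completion callback: tells the retryable
     * "ABORTED - SQ DELETION" (00/08) completions seen above apart
     * from genuine I/O errors. */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return;         /* normal completion */
            }
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The qpair was torn down mid-reset; the command never
                     * reached the namespace and may be resubmitted once the
                     * controller has reconnected. */
                    return;
            }
            /* Anything else is a real error for this READ. */
    }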
00:21:21.645 [2024-10-30 12:32:54.068038] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:21.645 [2024-10-30 12:32:54.068273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.645 [2024-10-30 12:32:54.068315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf66f0 with addr=10.0.0.2, port=4420
00:21:21.645 [2024-10-30 12:32:54.068333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf66f0 is same with the state(6) to be set
00:21:21.645 [2024-10-30 12:32:54.068422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.645 [2024-10-30 12:32:54.068448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf4270 with addr=10.0.0.2, port=4420
00:21:21.645 [2024-10-30 12:32:54.068465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf4270 is same with the state(6) to be set
00:21:21.645 [2024-10-30 12:32:54.068555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.645 [2024-10-30 12:32:54.068581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e110 with addr=10.0.0.2, port=4420
00:21:21.645 [2024-10-30 12:32:54.068602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e110 is same with the state(6) to be set
00:21:21.645 [2024-10-30 12:32:54.068723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:21.645 [2024-10-30 12:32:54.068749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1017810 with addr=10.0.0.2, port=4420
00:21:21.645 [2024-10-30 12:32:54.068765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1017810 is same with the state(6) to be set
00:21:21.645 [2024-10-30 12:32:54.069901 .. 12:32:54.071891] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:21:21.646 [2024-10-30 12:32:54.071906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e2180 is same with the state(6) to be set
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.646 [2024-10-30 12:32:54.073224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.646 [2024-10-30 12:32:54.073245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.646 [2024-10-30 12:32:54.073269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.646 [2024-10-30 12:32:54.073287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.646 [2024-10-30 12:32:54.073302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.073980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.073995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.647 [2024-10-30 12:32:54.074434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.647 [2024-10-30 12:32:54.074448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.648 [2024-10-30 12:32:54.074478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 
12:32:54.074781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.074975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.074990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.075021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.075055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.075085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.075115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.075145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.648 [2024-10-30 12:32:54.075175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:21.648 [2024-10-30 12:32:54.075189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e35c0 is same with the state(6) to be set 00:21:21.648 [2024-10-30 12:32:54.077828] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:21.648 [2024-10-30 12:32:54.077866] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:21.648 [2024-10-30 12:32:54.077885] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:21.648 [2024-10-30 12:32:54.077904] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:21.648 task offset: 24576 on job bdev=Nvme4n1 fails 00:21:21.648 00:21:21.648 Latency(us) 00:21:21.648 [2024-10-30T11:32:54.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.648 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme1n1 ended in about 0.94 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme1n1 : 0.94 210.41 13.15 68.36 0.00 227025.00 15922.82 253211.69 00:21:21.648 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme2n1 ended in about 0.92 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme2n1 : 0.92 207.63 12.98 69.21 0.00 224008.15 19806.44 253211.69 00:21:21.648 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme3n1 ended in about 0.93 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme3n1 : 0.93 207.38 12.96 69.13 0.00 219673.41 17185.00 240784.12 00:21:21.648 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme4n1 ended in about 0.92 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme4n1 : 0.92 208.16 13.01 69.39 0.00 214224.97 32622.36 256318.58 00:21:21.648 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme5n1 ended in about 0.94 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme5n1 : 
0.94 136.24 8.52 68.12 0.00 285408.52 19320.98 260978.92 00:21:21.648 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme6n1 ended in about 0.95 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme6n1 : 0.95 134.59 8.41 67.30 0.00 283129.68 21456.97 253211.69 00:21:21.648 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme7n1 ended in about 0.95 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme7n1 : 0.95 134.13 8.38 67.06 0.00 278053.93 21359.88 301368.51 00:21:21.648 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme8n1 ended in about 0.96 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme8n1 : 0.96 132.93 8.31 66.46 0.00 275030.47 23592.96 254765.13 00:21:21.648 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme9n1 ended in about 0.97 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme9n1 : 0.97 132.48 8.28 66.24 0.00 270363.31 18932.62 250104.79 00:21:21.648 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:21.648 Job: Nvme10n1 ended in about 0.96 seconds with error 00:21:21.648 Verification LBA range: start 0x0 length 0x400 00:21:21.648 Nvme10n1 : 0.96 133.67 8.35 66.84 0.00 261733.39 22427.88 284280.60 00:21:21.648 [2024-10-30T11:32:54.329Z] =================================================================================================================== 00:21:21.648 [2024-10-30T11:32:54.329Z] Total : 1637.62 102.35 678.10 0.00 249973.27 15922.82 301368.51 00:21:21.648 [2024-10-30 12:32:54.106268] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:21.648 [2024-10-30 12:32:54.106362] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:21.648 [2024-10-30 12:32:54.106665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.648 [2024-10-30 12:32:54.106703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1052e10 with addr=10.0.0.2, port=4420 00:21:21.648 [2024-10-30 12:32:54.106723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1052e10 is same with the state(6) to be set 00:21:21.648 [2024-10-30 12:32:54.106753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf66f0 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.106784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf4270 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.106803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e110 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.106822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1017810 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.107159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.649 [2024-10-30 12:32:54.107191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbebb30 with addr=10.0.0.2, port=4420 00:21:21.649 [2024-10-30 12:32:54.107209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbebb30 is same with the 
state(6) to be set 00:21:21.649 [2024-10-30 12:32:54.107307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.649 [2024-10-30 12:32:54.107333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbed290 with addr=10.0.0.2, port=4420 00:21:21.649 [2024-10-30 12:32:54.107349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbed290 is same with the state(6) to be set 00:21:21.649 [2024-10-30 12:32:54.107439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.649 [2024-10-30 12:32:54.107463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbec310 with addr=10.0.0.2, port=4420 00:21:21.649 [2024-10-30 12:32:54.107480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbec310 is same with the state(6) to be set 00:21:21.649 [2024-10-30 12:32:54.107561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.649 [2024-10-30 12:32:54.107597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1017040 with addr=10.0.0.2, port=4420 00:21:21.649 [2024-10-30 12:32:54.107615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1017040 is same with the state(6) to be set 00:21:21.649 [2024-10-30 12:32:54.107712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.649 [2024-10-30 12:32:54.107738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x104fd30 with addr=10.0.0.2, port=4420 00:21:21.649 [2024-10-30 12:32:54.107754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104fd30 is same with the state(6) to be set 00:21:21.649 [2024-10-30 12:32:54.107774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1052e10 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.107793] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.107808] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.107824] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.107847] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.107863] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.107877] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.107895] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.107909] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.107922] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:21:21.649 [2024-10-30 12:32:54.107940] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.107954] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.107968] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.107991] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:21:21.649 [2024-10-30 12:32:54.108016] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:21.649 [2024-10-30 12:32:54.108038] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:21:21.649 [2024-10-30 12:32:54.108060] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:21:21.649 [2024-10-30 12:32:54.108080] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:21.649 [2024-10-30 12:32:54.108741] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.108768] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.108784] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.108798] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.108816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbebb30 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.108858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbed290 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.108878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbec310 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.108896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1017040 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.108914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x104fd30 (9): Bad file descriptor 00:21:21.649 [2024-10-30 12:32:54.108931] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.108944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.108957] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.109022] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:21.649 [2024-10-30 12:32:54.109045] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.109059] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.109072] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.109089] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.109104] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.109117] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.109134] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.109149] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.109162] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.109178] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.109193] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.109208] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.109227] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:21.649 [2024-10-30 12:32:54.109242] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:21.649 [2024-10-30 12:32:54.109263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:21.649 [2024-10-30 12:32:54.109330] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.109351] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.109365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.109379] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:21.649 [2024-10-30 12:32:54.109392] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
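The abort flood above is two bursts of per-command notices, one READ/ABORTED - SQ DELETION pair for each outstanding cid on the two qpairs, so it condenses well with standard tools. A minimal sketch, assuming the console output has been captured to a hypothetical file named shutdown_tc3.log:

  # Total aborted completions, then per-qpair message counts;
  # both patterns appear verbatim in the log above.
  grep -c 'ABORTED - SQ DELETION' shutdown_tc3.log
  grep -o 'tqpair=0x[0-9a-f]*' shutdown_tc3.log | sort | uniq -c | sort -rn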
00:21:21.908 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:21:23.286 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 659195
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 659195
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 659195
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:23.286 rmmod nvme_tcp
00:21:23.286 rmmod nvme_fabrics
00:21:23.286 rmmod nvme_keyring
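The NOT wait 659195 sequence traced at the start of this teardown is the suite asserting that the bdevperf process is really gone: wait exits with es=255, anything above 128 (death by signal) is normalized to 127, and the helper succeeds precisely because the wrapped command failed. A minimal sketch of that inversion (the real helper in autotest_common.sh distinguishes more exit-status classes):

  # Succeed only when the wrapped command fails; statuses above 128
  # are treated like any other failure before the final test.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=127
      (( es != 0 ))  # status of this test becomes NOT's exit status
  }
  # e.g. "NOT wait 659195" returns 0 once that pid no longer exists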
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 659130 ']'
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 659130
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 659130 ']'
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 659130
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (659130) - No such process
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 659130 is not found'
Process with pid 659130 is not found
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:25.194 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:25.194
00:21:25.194 real 0m7.678s
00:21:25.194 user 0m19.329s
00:21:25.194 sys 0m1.511s
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:25.194 ************************************
00:21:25.194 END TEST nvmf_shutdown_tc3
00:21:25.194 ************************************
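The START/END banners and the real/user/sys block above come from run_test, which wraps every test case; the target/shutdown.sh@167 line just below launches nvmf_shutdown_tc4 the same way. A rough sketch of the wrapper's shape (the real implementation in autotest_common.sh also validates its arguments and records suite nesting):

  # run_test-style wrapper: banner, time the test body, banner again.
  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return "$rc"
  }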
00:21:25.194 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:25.194 ************************************
00:21:25.194 START TEST nvmf_shutdown_tc4
00:21:25.194 ************************************
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315-@344: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays declared; e810+=(0x1592 0x159b), x722+=(0x37d2), mlx+=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013) ...]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
[... nvmf/common.sh@368-@378: driver checks for 0000:0a:00.0 (driver ice, not unknown/unbound; 0x159b is neither 0x1017 nor 0x1019; transport tcp, not rdma) ...]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
[... the same nvmf/common.sh@368-@378 checks repeated for 0000:0a:00.1 ...]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:25.195 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:25.195 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.195 12:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.195 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:21:25.454 00:21:25.454 --- 10.0.0.2 ping statistics --- 00:21:25.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.454 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:25.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:21:25.454 00:21:25.454 --- 10.0.0.1 ping statistics --- 00:21:25.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.454 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=660107 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 660107 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 660107 ']' 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
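The xtrace above shows nvmf_tcp_init carving the two E810 ports into a point-to-point test rig: cvl_0_0 is moved into a private network namespace and addressed as the target (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), an iptables rule opens the NVMe/TCP port, and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A minimal stand-alone sketch of the same sequence, assuming the port names from this run and root privileges (the real logic lives in nvmf/common.sh):

ip -4 addr flush cvl_0_0                      # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

With the two physical ports cabled back-to-back this exercises real NIC traffic without a second machine; the target application is then run under ip netns exec cvl_0_0_ns_spdk so it binds 10.0.0.2, exactly as the nvmf_tgt command line below does.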
00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:25.454 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.454 [2024-10-30 12:32:57.972819] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:25.454 [2024-10-30 12:32:57.972906] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.454 [2024-10-30 12:32:58.047175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.454 [2024-10-30 12:32:58.107608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.455 [2024-10-30 12:32:58.107669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.455 [2024-10-30 12:32:58.107683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.455 [2024-10-30 12:32:58.107703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.455 [2024-10-30 12:32:58.107712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.455 [2024-10-30 12:32:58.109238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.455 [2024-10-30 12:32:58.109304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.455 [2024-10-30 12:32:58.109389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:25.455 [2024-10-30 12:32:58.109391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.712 [2024-10-30 12:32:58.246062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:25.712 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.712 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.713 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:25.713 Malloc1 
00:21:25.713 [2024-10-30 12:32:58.334157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.713 Malloc2 00:21:25.970 Malloc3 00:21:25.970 Malloc4 00:21:25.970 Malloc5 00:21:25.970 Malloc6 00:21:25.970 Malloc7 00:21:25.970 Malloc8 00:21:26.227 Malloc9 00:21:26.228 Malloc10 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=660284 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:26.228 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:26.228 [2024-10-30 12:32:58.854114] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 660107 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 660107 ']' 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 660107 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 660107 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 660107' 00:21:31.497 killing process with pid 660107 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 660107 00:21:31.497 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 660107 00:21:31.497 [2024-10-30 12:33:03.848961] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.849204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e187c0 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.851356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19160 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.851416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19160 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.851434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19160 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.851447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19160 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.851460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19160 is same with the state(6) to be set 00:21:31.497 [2024-10-30 12:33:03.851472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19160 is same with the state(6) to be set 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with 
error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 [2024-10-30 12:33:03.853949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with 
error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 starting I/O failed: -6 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.497 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.855172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O 
failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.856377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 
starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.857131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 [2024-10-30 12:33:03.857168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.857184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 [2024-10-30 12:33:03.857196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.857208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.857220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 [2024-10-30 12:33:03.857234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.857247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 [2024-10-30 12:33:03.857267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 starting I/O failed: -6 00:21:31.498 [2024-10-30 12:33:03.857281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 [2024-10-30 12:33:03.857322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1ae40 is same with the state(6) to be set 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed:
-6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.498 Write completed with error (sct=0, sc=8) 00:21:31.498 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 [2024-10-30 12:33:03.858018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.499 NVMe io qpair process completion error 00:21:31.499 [2024-10-30 12:33:03.858254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the 
state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.858415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b7e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.863891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb73e0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.864445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb78d0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.864481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb78d0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.864498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb78d0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.864511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb78d0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.864523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb78d0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.864614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb78d0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write 
completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.865486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7dc0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.865521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7dc0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.865517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.499 [2024-10-30 12:33:03.865538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7dc0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.865551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7dc0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.865564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7dc0 is same with the state(6) to be set 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 
00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 [2024-10-30 12:33:03.866236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.866290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.866311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 [2024-10-30 12:33:03.866325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 [2024-10-30 12:33:03.866337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.866350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.866363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 starting I/O failed: -6 00:21:31.499 [2024-10-30 12:33:03.866376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.866388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecfec0 is same with the state(6) to be set 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6
00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 [2024-10-30 12:33:03.866497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 starting I/O failed: -6 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.499 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6 00:21:31.500 Write completed 
with error (sct=0, sc=8) 00:21:31.500 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided ...]
00:21:31.500 [2024-10-30 12:33:03.867647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure lines elided ...]
00:21:31.500 [2024-10-30 12:33:03.869431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.500 NVMe io qpair process completion error
00:21:31.500 [2024-10-30 12:33:03.869655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8780 is same with the state(6) to be set
[... the tqpair=0x1eb8780 line above repeats 9 times in total (timestamps 12:33:03.869655 through .869801) ...]
00:21:31.501 [2024-10-30 12:33:03.870185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8c70 is same with the state(6) to be set
[... the tqpair=0x1eb8c70 line above repeats 11 times in total (timestamps 12:33:03.870185 through .870367), interleaved with further write-failure lines ...]
00:21:31.501 [2024-10-30 12:33:03.870674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.501 [2024-10-30 12:33:03.870902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb9140 is same with the state(6) to be set
[... the tqpair=0x1eb9140 line above repeats 8 times in total (timestamps 12:33:03.870902 through .871029), interleaved with further write-failure lines ...]
00:21:31.501 [2024-10-30 12:33:03.871659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines elided ...]
00:21:31.502 [2024-10-30 12:33:03.873001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure lines elided ...]
00:21:31.502 [2024-10-30 12:33:03.874840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.502 NVMe io qpair process completion error
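The recurring "CQ transport error -6 (No such device or address)" above is -ENXIO surfacing from SPDK's completion-polling path once a qpair has failed at the transport layer (here, the TCP connection to the target going away mid-test). A minimal sketch of how an initiator sees this, assuming a connected struct spdk_nvme_qpair and the public spdk_nvme_qpair_process_completions() API; the poll_qpair() helper name is illustrative, not part of this test:

/* Sketch only: poll completions on an NVMe-oF qpair and detect the
 * transport failure logged above. -ENXIO == -6, matching the
 * "CQ transport error -6" messages printed by nvme_qpair.c. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* Second argument 0 means "reap as many completions as available". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Qpair failed at the transport level; every outstanding
		 * command is completed back to the caller with an error,
		 * which is what produces the flood of "Write completed
		 * with error" lines in this log. */
		fprintf(stderr, "qpair failed at transport level\n");
	} else if (rc < 0) {
		fprintf(stderr, "completion processing failed: %d\n", rc);
	}
}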
[... repeated write-failure lines ("Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6") elided ...]
00:21:31.503 [2024-10-30 12:33:03.876110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines elided ...]
00:21:31.503 [2024-10-30 12:33:03.877159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure lines elided ...]
00:21:31.503 [2024-10-30 12:33:03.878366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure lines elided ...]
00:21:31.504 [2024-10-30 12:33:03.880110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.504 NVMe io qpair process completion error
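The (sct=0, sc=8) status on each failed write decodes as status code type 0 (generic command status) with status code 0x8, which SPDK's nvme_spec.h names SPDK_NVME_SC_ABORTED_SQ_DELETION: the outstanding writes are aborted because their submission queues are torn down along with the failed qpair. A minimal sketch of decoding this in an I/O completion callback, assuming the standard spdk_nvme_cmd_cb signature (the write_done() name is illustrative, not the test's actual callback):

/* Sketch only: decode the (sct=0, sc=8) status seen in the log. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the failures above this prints sct=0, sc=8:
		 * generic status / aborted due to SQ deletion. */
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d): %s\n",
			cpl->status.sct, cpl->status.sc,
			spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}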
[... repeated write-failure lines ("Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6") elided ...]
00:21:31.504 [2024-10-30 12:33:03.881438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines elided ...]
00:21:31.504 [2024-10-30 12:33:03.882520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure lines elided ...]
00:21:31.505 [2024-10-30 12:33:03.883724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure lines elided ...]
00:21:31.505 [2024-10-30 12:33:03.885948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.505 NVMe io qpair process completion error
[... repeated write-failure lines elided ...]
00:21:31.505 [2024-10-30 12:33:03.887330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines elided ...]
00:21:31.506 [2024-10-30 12:33:03.888440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure lines elided ...]
00:21:31.506 Write completed with
error (sct=0, sc=8) 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 [2024-10-30 12:33:03.889561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.506 starting I/O failed: -6 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error 
(sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.506 starting I/O failed: -6 00:21:31.506 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 [2024-10-30 12:33:03.893290] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.507 NVMe io qpair process completion error 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 [2024-10-30 12:33:03.894706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 
starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 [2024-10-30 12:33:03.895726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write 
completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 Write completed with error (sct=0, sc=8) 00:21:31.507 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 [2024-10-30 12:33:03.896878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.508 starting I/O failed: -6 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O 
failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O 
failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 [2024-10-30 12:33:03.901414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.508 NVMe io qpair process completion error 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write 
completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 [2024-10-30 12:33:03.902775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 starting I/O failed: -6 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.508 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 
00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 [2024-10-30 12:33:03.903907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 
00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 [2024-10-30 12:33:03.905049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.509 starting I/O failed: -6 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write 
completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.509 starting I/O failed: -6 00:21:31.509 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 [2024-10-30 12:33:03.907300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:31.510 NVMe io qpair process completion error 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error 
(sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 [2024-10-30 12:33:03.908514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error 
(sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 [2024-10-30 12:33:03.909617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:31.510 starting I/O failed: -6 00:21:31.510 starting I/O failed: -6 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with 
error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.510 starting I/O failed: -6 00:21:31.510 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 [2024-10-30 12:33:03.911013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 00:21:31.511 Write completed with error (sct=0, sc=8) 00:21:31.511 starting I/O failed: -6 
00:21:31.511 Write completed with error (sct=0, sc=8)
00:21:31.511 starting I/O failed: -6
00:21:31.511 [the two lines above repeat for every outstanding I/O on the failing qpair; verbatim duplicates collapsed]
00:21:31.511 [2024-10-30 12:33:03.913284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.511 NVMe io qpair process completion error
00:21:31.512 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines collapsed]
00:21:31.512 [2024-10-30 12:33:03.914466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:31.512 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines collapsed]
00:21:31.512 [2024-10-30 12:33:03.915579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:31.512 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines collapsed]
00:21:31.512 [2024-10-30 12:33:03.916796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:31.513 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines collapsed]
00:21:31.513 [2024-10-30 12:33:03.920950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:31.513 NVMe io qpair process completion error
00:21:31.513 Initializing NVMe Controllers
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:31.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:31.513 Controller IO queue size 128, less than required.
00:21:31.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
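The warning repeated above is spdk_nvme_perf telling the operator that each fabrics controller advertises an IO queue size of 128, smaller than the benchmark's requested queue depth, so surplus requests sit queued inside the NVMe driver. A minimal sketch of a rerun that stays within the advertised queue size, assuming spdk_nvme_perf's usual flags (-q queue depth, -o IO size in bytes, -w workload, -t seconds, -r transport ID) and reusing the target address from this run:

  # Hypothetical rerun with queue depth at or below the advertised size (128),
  # so no requests have to queue inside the NVMe driver.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:TCP adrfa:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -q 64 -o 4096 -w write -t 10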
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:31.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:31.513 Initialization complete. Launching workers.
00:21:31.513 ========================================================
00:21:31.513 Latency(us)
00:21:31.513 Device Information : IOPS MiB/s Average min max
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1731.98 74.42 73929.90 1139.68 138106.03
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1785.72 76.73 70893.83 1072.69 128673.78
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1751.40 75.26 73084.97 997.04 126848.38
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1722.05 73.99 74364.02 1175.51 141011.60
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1732.19 74.43 73089.02 854.36 123395.65
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1776.22 76.32 71301.41 991.23 121307.25
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1728.31 74.26 73303.98 1083.80 123815.82
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1761.76 75.70 71940.06 881.26 121206.93
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1758.09 75.54 72123.40 1107.25 128689.23
00:21:31.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1777.30 76.37 71402.65 979.23 132952.42
00:21:31.513 ========================================================
00:21:31.513 Total : 17525.04 753.03 72529.95 854.36 141011.60
00:21:31.513
00:21:31.513 [2024-10-30 12:33:03.925796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3d2c0 is same with the state(6) to be set
00:21:31.513 [2024-10-30 12:33:03.925888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3e720 is same with the state(6) to be set
00:21:31.513 [2024-10-30 12:33:03.925948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3cd10 is same with the state(6) to be set
00:21:31.513 [2024-10-30 12:33:03.926007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3d920 is same with the state(6) to be set
00:21:31.513 [2024-10-30 12:33:03.926065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3c9e0 is same with the state(6) to be set
00:21:31.513 [2024-10-30 12:33:03.926122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xc3d5f0 is same with the state(6) to be set 00:21:31.513 [2024-10-30 12:33:03.926178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3eae0 is same with the state(6) to be set 00:21:31.513 [2024-10-30 12:33:03.926234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3e900 is same with the state(6) to be set 00:21:31.514 [2024-10-30 12:33:03.926297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3dc50 is same with the state(6) to be set 00:21:31.514 [2024-10-30 12:33:03.926356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3c6b0 is same with the state(6) to be set 00:21:31.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:31.774 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 660284 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 660284 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 660284 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:32.713 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:32.714 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.714 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.714 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:32.714 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.714 12:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.974 rmmod nvme_tcp 00:21:32.974 rmmod nvme_fabrics 00:21:32.974 rmmod nvme_keyring 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 660107 ']' 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 660107 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 660107 ']' 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 660107 00:21:32.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (660107) - No such process 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 660107 is not found' 00:21:32.974 Process with pid 660107 is not found 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.974 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.878 12:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.878 00:21:34.878 real 0m9.775s 00:21:34.878 user 0m22.786s 00:21:34.878 sys 0m5.972s 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 ************************************ 00:21:34.878 END TEST nvmf_shutdown_tc4 00:21:34.878 ************************************ 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:34.878 00:21:34.878 real 0m37.415s 00:21:34.878 user 1m39.775s 00:21:34.878 sys 0m12.473s 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 ************************************ 00:21:34.878 END TEST nvmf_shutdown 00:21:34.878 ************************************ 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.878 12:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 ************************************ 00:21:35.137 START TEST nvmf_nsid 00:21:35.137 ************************************ 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:35.137 * Looking for test storage... 
00:21:35.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:35.137 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.138 --rc genhtml_branch_coverage=1 00:21:35.138 --rc genhtml_function_coverage=1 00:21:35.138 --rc genhtml_legend=1 00:21:35.138 --rc geninfo_all_blocks=1 00:21:35.138 --rc geninfo_unexecuted_blocks=1 00:21:35.138 00:21:35.138 ' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.138 --rc genhtml_branch_coverage=1 00:21:35.138 --rc genhtml_function_coverage=1 00:21:35.138 --rc genhtml_legend=1 00:21:35.138 --rc geninfo_all_blocks=1 00:21:35.138 --rc geninfo_unexecuted_blocks=1 00:21:35.138 00:21:35.138 ' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.138 --rc genhtml_branch_coverage=1 00:21:35.138 --rc genhtml_function_coverage=1 00:21:35.138 --rc genhtml_legend=1 00:21:35.138 --rc geninfo_all_blocks=1 00:21:35.138 --rc geninfo_unexecuted_blocks=1 00:21:35.138 00:21:35.138 ' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:35.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.138 --rc genhtml_branch_coverage=1 00:21:35.138 --rc genhtml_function_coverage=1 00:21:35.138 --rc genhtml_legend=1 00:21:35.138 --rc geninfo_all_blocks=1 00:21:35.138 --rc geninfo_unexecuted_blocks=1 00:21:35.138 00:21:35.138 ' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.138 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:37.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:37.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
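The scan above has just matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driver ice); the next steps locate the kernel net devices bound to those PCI addresses. A standalone sketch of the same sysfs walk, with the vendor/device pair hard-coded for illustration:

  # Sketch of the sysfs scan gather_supported_nvmf_pci_devs performs:
  # match a PCI vendor/device pair, then list the net interfaces under it.
  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} (0x8086 - 0x159b)"
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done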
00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:37.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:37.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.670 12:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:37.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:21:37.670 00:21:37.670 --- 10.0.0.2 ping statistics --- 00:21:37.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.670 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:21:37.670 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:21:37.671 00:21:37.671 --- 10.0.0.1 ping statistics --- 00:21:37.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.671 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=663646 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 663646 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 663646 ']' 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:37.671 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:37.671 [2024-10-30 12:33:10.021395] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:21:37.671 [2024-10-30 12:33:10.021512] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.671 [2024-10-30 12:33:10.095829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.671 [2024-10-30 12:33:10.152344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.671 [2024-10-30 12:33:10.152405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.671 [2024-10-30 12:33:10.152429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.671 [2024-10-30 12:33:10.152440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.671 [2024-10-30 12:33:10.152450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.671 [2024-10-30 12:33:10.153044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=663671 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c6826be2-d8d2-40ce-b63b-f03112b69f98 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=81ef365a-baf0-4adb-a068-2740d21781b5 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=de4a1bd4-1a18-4fca-a735-67b990617677 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@60 -- # rpc_cmd 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.671 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:37.671 null0 00:21:37.671 null1 00:21:37.671 null2 00:21:37.671 [2024-10-30 12:33:10.329367] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.671 [2024-10-30 12:33:10.342335] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:21:37.671 [2024-10-30 12:33:10.342418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid663671 ] 00:21:37.929 [2024-10-30 12:33:10.353592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@76 -- # waitforlisten 663671 /var/tmp/tgt2.sock 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 663671 ']' 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:37.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:37.929 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:37.929 [2024-10-30 12:33:10.410659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.929 [2024-10-30 12:33:10.469910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.187 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:38.187 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:38.187 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:38.753 [2024-10-30 12:33:11.137175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.753 [2024-10-30 12:33:11.153418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:38.753 nvme0n1 nvme0n2 00:21:38.753 nvme1n1 00:21:38.753 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@91 -- # nvme_connect 00:21:38.753 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:38.753 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:39.329 12:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@91 -- # ctrlr=nvme0 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@92 -- # waitforblk nvme0n1 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:21:39.329 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@93 -- # uuid2nguid c6826be2-d8d2-40ce-b63b-f03112b69f98 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@93 -- # nvme_get_nguid nvme0 1 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c6826be2d8d240ceb63bf03112b69f98 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C6826BE2D8D240CEB63BF03112B69F98 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@93 -- # [[ C6826BE2D8D240CEB63BF03112B69F98 == \C\6\8\2\6\B\E\2\D\8\D\2\4\0\C\E\B\6\3\B\F\0\3\1\1\2\B\6\9\F\9\8 ]] 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # waitforblk nvme0n2 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@95 -- # uuid2nguid 81ef365a-baf0-4adb-a068-2740d21781b5 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # nvme_get_nguid nvme0 2 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=81ef365abaf04adba0682740d21781b5 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 81EF365ABAF04ADBA0682740D21781B5 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # [[ 81EF365ABAF04ADBA0682740D21781B5 == \8\1\E\F\3\6\5\A\B\A\F\0\4\A\D\B\A\0\6\8\2\7\4\0\D\2\1\7\8\1\B\5 ]] 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # waitforblk nvme0n3 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # uuid2nguid de4a1bd4-1a18-4fca-a735-67b990617677 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # nvme_get_nguid nvme0 3 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:40.262 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:40.520 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=de4a1bd41a184fcaa73567b990617677 00:21:40.520 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DE4A1BD41A184FCAA73567B990617677 00:21:40.520 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # [[ DE4A1BD41A184FCAA73567B990617677 == \D\E\4\A\1\B\D\4\1\A\1\8\4\F\C\A\A\7\3\5\6\7\B\9\9\0\6\1\7\6\7\7 ]] 00:21:40.520 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme disconnect -d /dev/nvme0 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # cleanup 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 663671 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 663671 ']' 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@956 -- # kill -0 663671 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 663671 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 663671' 00:21:40.520 killing process with pid 663671 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 663671 00:21:40.520 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 663671 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.086 rmmod nvme_tcp 00:21:41.086 rmmod nvme_fabrics 00:21:41.086 rmmod nvme_keyring 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 663646 ']' 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 663646 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 663646 ']' 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 663646 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 663646 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 663646' 00:21:41.086 killing process with pid 663646 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 663646 00:21:41.086 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 
663646 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.344 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.887 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.887 00:21:43.887 real 0m8.358s 00:21:43.887 user 0m8.281s 00:21:43.887 sys 0m2.603s 00:21:43.887 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:43.887 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:43.887 ************************************ 00:21:43.887 END TEST nvmf_nsid 00:21:43.887 ************************************ 00:21:43.887 12:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:43.887 00:21:43.887 real 11m42.836s 00:21:43.887 user 27m42.174s 00:21:43.887 sys 2m50.027s 00:21:43.887 12:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:43.887 12:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.887 ************************************ 00:21:43.887 END TEST nvmf_target_extra 00:21:43.887 ************************************ 00:21:43.887 12:33:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:43.887 12:33:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:43.887 12:33:15 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:43.887 12:33:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:43.887 ************************************ 00:21:43.887 START TEST nvmf_host 00:21:43.887 ************************************ 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:43.887 * Looking for test storage... 
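That closes out nsid.sh: the assertion it ran three times above (once per namespace) is that the NGUID read back over NVMe/TCP equals the UUID the namespace was created with, dashes stripped and case ignored. The same check as a standalone snippet, using the first UUID from the trace (the uppercasing detail is an assumption; the real uuid2nguid/nvme_get_nguid helpers live in nvmf/common.sh and nsid.sh):

  uuid=c6826be2-d8d2-40ce-b63b-f03112b69f98       # from uuidgen, per the trace
  expected=$(tr -d - <<< "$uuid")                 # uuid2nguid: drop the dashes
  nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ ${nguid^^} == "${expected^^}" ]] && echo "nsid 1: NGUID matches its UUID"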
00:21:43.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.887 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:43.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.888 --rc genhtml_branch_coverage=1 00:21:43.888 --rc genhtml_function_coverage=1 00:21:43.888 --rc genhtml_legend=1 00:21:43.888 --rc geninfo_all_blocks=1 00:21:43.888 --rc geninfo_unexecuted_blocks=1 00:21:43.888 00:21:43.888 ' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:43.888 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.888 --rc genhtml_branch_coverage=1 00:21:43.888 --rc genhtml_function_coverage=1 00:21:43.888 --rc genhtml_legend=1 00:21:43.888 --rc geninfo_all_blocks=1 00:21:43.888 --rc geninfo_unexecuted_blocks=1 00:21:43.888 00:21:43.888 ' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:43.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.888 --rc genhtml_branch_coverage=1 00:21:43.888 --rc genhtml_function_coverage=1 00:21:43.888 --rc genhtml_legend=1 00:21:43.888 --rc geninfo_all_blocks=1 00:21:43.888 --rc geninfo_unexecuted_blocks=1 00:21:43.888 00:21:43.888 ' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:43.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.888 --rc genhtml_branch_coverage=1 00:21:43.888 --rc genhtml_function_coverage=1 00:21:43.888 --rc genhtml_legend=1 00:21:43.888 --rc geninfo_all_blocks=1 00:21:43.888 --rc geninfo_unexecuted_blocks=1 00:21:43.888 00:21:43.888 ' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.888 ************************************ 00:21:43.888 START TEST nvmf_multicontroller 00:21:43.888 ************************************ 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:43.888 * Looking for test storage... 00:21:43.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.888 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:43.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.889 --rc genhtml_branch_coverage=1 00:21:43.889 --rc genhtml_function_coverage=1 00:21:43.889 --rc genhtml_legend=1 00:21:43.889 --rc geninfo_all_blocks=1 00:21:43.889 --rc geninfo_unexecuted_blocks=1 00:21:43.889 00:21:43.889 ' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:43.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.889 --rc genhtml_branch_coverage=1 00:21:43.889 --rc genhtml_function_coverage=1 00:21:43.889 --rc genhtml_legend=1 00:21:43.889 --rc geninfo_all_blocks=1 00:21:43.889 --rc geninfo_unexecuted_blocks=1 00:21:43.889 00:21:43.889 ' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:43.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.889 --rc genhtml_branch_coverage=1 00:21:43.889 --rc genhtml_function_coverage=1 00:21:43.889 --rc genhtml_legend=1 00:21:43.889 --rc geninfo_all_blocks=1 00:21:43.889 --rc geninfo_unexecuted_blocks=1 00:21:43.889 00:21:43.889 ' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:43.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.889 --rc genhtml_branch_coverage=1 00:21:43.889 --rc genhtml_function_coverage=1 00:21:43.889 --rc genhtml_legend=1 00:21:43.889 --rc geninfo_all_blocks=1 00:21:43.889 --rc geninfo_unexecuted_blocks=1 00:21:43.889 00:21:43.889 ' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:43.889 12:33:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.889 12:33:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.889 12:33:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.420 
12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:46.420 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:46.420 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.420 12:33:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:46.420 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:46.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
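gather_supported_nvmf_pci_devs, traced above, matches the NIC allowlist (e810 0x1592/0x159b, x722 0x37d2, the Mellanox IDs) against the PCI bus and then resolves each hit to its kernel netdev through sysfs, which is how both 0x8086:0x159b ports end up reported with their cvl_0_* names. The resolution step reduces to a sysfs glob like this (loop body illustrative):

  for pci in 0000:0a:00.0 0000:0a:00.1; do         # the two e810 ports found above
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e $path ]] || continue                   # no netdev bound to this function
      echo "Found net devices under $pci: ${path##*/}"
    done
  done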
00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.420 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:21:46.421 00:21:46.421 --- 10.0.0.2 ping statistics --- 00:21:46.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.421 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:21:46.421 00:21:46.421 --- 10.0.0.1 ping statistics --- 00:21:46.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.421 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=666225 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 666225 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 666225 ']' 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:46.421 12:33:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.421 [2024-10-30 12:33:18.871681] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
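The nvmf_tcp_init sequence just traced builds the back-to-back test topology: one port (cvl_0_0) is moved into a private network namespace to act as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, the two get 10.0.0.2 and 10.0.0.1 on a shared /24, the NVMe/TCP listener port is opened in the firewall, and both directions are verified with ping. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

nvmf_tgt is then launched inside the namespace (NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk"), which is why the DPDK and reactor startup notices that follow come from the namespaced process.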
00:21:46.421 [2024-10-30 12:33:18.871768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.421 [2024-10-30 12:33:18.942995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:46.421 [2024-10-30 12:33:19.000440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.421 [2024-10-30 12:33:19.000494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.421 [2024-10-30 12:33:19.000518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.421 [2024-10-30 12:33:19.000529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.421 [2024-10-30 12:33:19.000538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.421 [2024-10-30 12:33:19.001931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.421 [2024-10-30 12:33:19.001997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.421 [2024-10-30 12:33:19.002000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 [2024-10-30 12:33:19.134628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 Malloc0 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 [2024-10-30 12:33:19.191292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 [2024-10-30 12:33:19.199171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 Malloc1 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=666253 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.679 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 666253 /var/tmp/bdevperf.sock 00:21:46.680 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 666253 ']' 00:21:46.680 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.680 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:46.680 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
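The rpc_cmd calls above provision the whole target in five steps per subsystem: create the TCP transport once, create a 64 MB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and add listeners on both ports so multipath can be exercised later. rpc_cmd is a thin wrapper around scripts/rpc.py, so an equivalent standalone sequence (flags copied verbatim from the trace, including -o and -u 8192) would look like:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...repeated with Malloc1/cnode2 for the second subsystem, as traced above.

bdevperf is then started with -z so it idles until driven over its own RPC socket, /var/tmp/bdevperf.sock; the attach/detach RPCs that follow therefore target the initiator process rather than nvmf_tgt.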
00:21:46.680 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:46.680 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.938 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:46.938 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:46.938 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:46.938 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.938 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.196 NVMe0n1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.196 1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.196 request: 00:21:47.196 { 00:21:47.196 "name": "NVMe0", 00:21:47.196 "trtype": "tcp", 00:21:47.196 "traddr": "10.0.0.2", 00:21:47.196 "adrfam": "ipv4", 00:21:47.196 "trsvcid": "4420", 00:21:47.196 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:47.196 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:47.196 "hostaddr": "10.0.0.1", 00:21:47.196 "prchk_reftag": false, 00:21:47.196 "prchk_guard": false, 00:21:47.196 "hdgst": false, 00:21:47.196 "ddgst": false, 00:21:47.196 "allow_unrecognized_csi": false, 00:21:47.196 "method": "bdev_nvme_attach_controller", 00:21:47.196 "req_id": 1 00:21:47.196 } 00:21:47.196 Got JSON-RPC error response 00:21:47.196 response: 00:21:47.196 { 00:21:47.196 "code": -114, 00:21:47.196 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:47.196 } 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.196 request: 00:21:47.196 { 00:21:47.196 "name": "NVMe0", 00:21:47.196 "trtype": "tcp", 00:21:47.196 "traddr": "10.0.0.2", 00:21:47.196 "adrfam": "ipv4", 00:21:47.196 "trsvcid": "4420", 00:21:47.196 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.196 "hostaddr": "10.0.0.1", 00:21:47.196 "prchk_reftag": false, 00:21:47.196 "prchk_guard": false, 00:21:47.196 "hdgst": false, 00:21:47.196 "ddgst": false, 00:21:47.196 "allow_unrecognized_csi": false, 00:21:47.196 "method": "bdev_nvme_attach_controller", 00:21:47.196 "req_id": 1 00:21:47.196 } 00:21:47.196 Got JSON-RPC error response 00:21:47.196 response: 00:21:47.196 { 00:21:47.196 "code": -114, 00:21:47.196 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:47.196 } 00:21:47.196 12:33:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.196 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.196 request: 00:21:47.196 { 00:21:47.196 "name": "NVMe0", 00:21:47.196 "trtype": "tcp", 00:21:47.196 "traddr": "10.0.0.2", 00:21:47.196 "adrfam": "ipv4", 00:21:47.196 "trsvcid": "4420", 00:21:47.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.196 "hostaddr": "10.0.0.1", 00:21:47.197 "prchk_reftag": false, 00:21:47.197 "prchk_guard": false, 00:21:47.197 "hdgst": false, 00:21:47.197 "ddgst": false, 00:21:47.197 "multipath": "disable", 00:21:47.197 "allow_unrecognized_csi": false, 00:21:47.197 "method": "bdev_nvme_attach_controller", 00:21:47.197 "req_id": 1 00:21:47.197 } 00:21:47.197 Got JSON-RPC error response 00:21:47.197 response: 00:21:47.197 { 00:21:47.197 "code": -114, 00:21:47.197 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:47.197 } 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.197 12:33:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.197 request: 00:21:47.197 { 00:21:47.197 "name": "NVMe0", 00:21:47.197 "trtype": "tcp", 00:21:47.197 "traddr": "10.0.0.2", 00:21:47.197 "adrfam": "ipv4", 00:21:47.197 "trsvcid": "4420", 00:21:47.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.197 "hostaddr": "10.0.0.1", 00:21:47.197 "prchk_reftag": false, 00:21:47.197 "prchk_guard": false, 00:21:47.197 "hdgst": false, 00:21:47.197 "ddgst": false, 00:21:47.197 "multipath": "failover", 00:21:47.197 "allow_unrecognized_csi": false, 00:21:47.197 "method": "bdev_nvme_attach_controller", 00:21:47.197 "req_id": 1 00:21:47.197 } 00:21:47.197 Got JSON-RPC error response 00:21:47.197 response: 00:21:47.197 { 00:21:47.197 "code": -114, 00:21:47.197 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:47.197 } 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.197 NVMe0n1 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
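The four rejected attach attempts above map out bdev_nvme_attach_controller's name-reuse rules: reusing the controller name NVMe0 with a different hostnqn, with a different subsystem NQN, or with multipath explicitly disabled all fail with JSON-RPC error -114, and "-x failover" over the already-attached portal is refused too. Only the final call, which reaches the same subsystem through the second listener, succeeds and adds a path. In short:

    # Accepted: same name, same subsystem, new portal -> a second path is added
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Rejected with -114: same name but a different identity (e.g. other subsystem)
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1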
00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.197 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.454 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:47.454 12:33:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:48.384 { 00:21:48.384 "results": [ 00:21:48.384 { 00:21:48.384 "job": "NVMe0n1", 00:21:48.384 "core_mask": "0x1", 00:21:48.384 "workload": "write", 00:21:48.384 "status": "finished", 00:21:48.384 "queue_depth": 128, 00:21:48.384 "io_size": 4096, 00:21:48.384 "runtime": 1.005094, 00:21:48.384 "iops": 18323.659279629566, 00:21:48.384 "mibps": 71.576794061053, 00:21:48.384 "io_failed": 0, 00:21:48.384 "io_timeout": 0, 00:21:48.384 "avg_latency_us": 6974.147674672555, 00:21:48.384 "min_latency_us": 5922.512592592592, 00:21:48.384 "max_latency_us": 19806.435555555556 00:21:48.384 } 00:21:48.384 ], 00:21:48.384 "core_count": 1 00:21:48.384 } 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 666253 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 666253 ']' 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 666253 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:48.384 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 666253 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 666253' 00:21:48.642 killing process with pid 666253 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 666253 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 666253 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:48.642 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:48.642 [2024-10-30 12:33:19.299744] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:21:48.642 [2024-10-30 12:33:19.299845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666253 ] 00:21:48.642 [2024-10-30 12:33:19.368347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.642 [2024-10-30 12:33:19.427101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.642 [2024-10-30 12:33:19.878995] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 6c320aa3-a6d5-4c4f-93bf-ba18ceda235a already exists 00:21:48.642 [2024-10-30 12:33:19.879055] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:6c320aa3-a6d5-4c4f-93bf-ba18ceda235a alias for bdev NVMe1n1 00:21:48.642 [2024-10-30 12:33:19.879069] bdev_nvme.c:4605:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:48.642 Running I/O for 1 seconds... 00:21:48.642 18289.00 IOPS, 71.44 MiB/s 00:21:48.642 Latency(us) 00:21:48.642 [2024-10-30T11:33:21.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.642 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:48.642 NVMe0n1 : 1.01 18323.66 71.58 0.00 0.00 6974.15 5922.51 19806.44 00:21:48.642 [2024-10-30T11:33:21.323Z] =================================================================================================================== 00:21:48.642 [2024-10-30T11:33:21.323Z] Total : 18323.66 71.58 0.00 0.00 6974.15 5922.51 19806.44 00:21:48.642 Received shutdown signal, test time was about 1.000000 seconds 00:21:48.642 00:21:48.642 Latency(us) 00:21:48.642 [2024-10-30T11:33:21.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.642 [2024-10-30T11:33:21.323Z] =================================================================================================================== 00:21:48.642 [2024-10-30T11:33:21.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.642 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:48.642 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.899 rmmod nvme_tcp 00:21:48.899 rmmod nvme_fabrics 00:21:48.899 rmmod nvme_keyring 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:48.899 
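Two details in the try.txt dump above are worth decoding. The bdev.c "name already exists" errors are unsurprising here: NVMe1 reaches the same namespace as NVMe0, so its bdev carries the same UUID (6c320aa3-...), the duplicate registration of NVMe1n1 is refused, and the run continues. The performance figures are also internally consistent for 4 KiB writes, since MiB/s = IOPS x 4096 / 2^20:

    # Sanity check of the bdevperf result block above:
    #   18323.66 IOPS x 4096 B        ~= 75,053,711 B/s ~= 71.58 MiB/s (reported 71.58)
    #   18323.66 IOPS x 1.005094 s    ~= 18,417 I/Os completed in the ~1 s run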
12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 666225 ']' 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 666225 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 666225 ']' 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 666225 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 666225 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 666225' 00:21:48.899 killing process with pid 666225 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 666225 00:21:48.899 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 666225 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.158 12:33:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.687 00:21:51.687 real 0m7.520s 00:21:51.687 user 0m11.020s 00:21:51.687 sys 0m2.407s 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.687 ************************************ 00:21:51.687 END TEST nvmf_multicontroller 00:21:51.687 ************************************ 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.687 ************************************ 00:21:51.687 START TEST nvmf_aer 00:21:51.687 ************************************ 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:51.687 * Looking for test storage... 00:21:51.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:51.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.687 --rc genhtml_branch_coverage=1 00:21:51.687 --rc genhtml_function_coverage=1 00:21:51.687 --rc genhtml_legend=1 00:21:51.687 --rc geninfo_all_blocks=1 00:21:51.687 --rc geninfo_unexecuted_blocks=1 00:21:51.687 00:21:51.687 ' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:51.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.687 --rc genhtml_branch_coverage=1 00:21:51.687 --rc genhtml_function_coverage=1 00:21:51.687 --rc genhtml_legend=1 00:21:51.687 --rc geninfo_all_blocks=1 00:21:51.687 --rc geninfo_unexecuted_blocks=1 00:21:51.687 00:21:51.687 ' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:51.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.687 --rc genhtml_branch_coverage=1 00:21:51.687 --rc genhtml_function_coverage=1 00:21:51.687 --rc genhtml_legend=1 00:21:51.687 --rc geninfo_all_blocks=1 00:21:51.687 --rc geninfo_unexecuted_blocks=1 00:21:51.687 00:21:51.687 ' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:51.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.687 --rc genhtml_branch_coverage=1 00:21:51.687 --rc genhtml_function_coverage=1 00:21:51.687 --rc genhtml_legend=1 00:21:51.687 --rc geninfo_all_blocks=1 00:21:51.687 --rc geninfo_unexecuted_blocks=1 00:21:51.687 00:21:51.687 ' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.687 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.688 12:33:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:53.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:53.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:53.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.656 12:33:26 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:53.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.656 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.657 
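The nvmftestinit sequence traced above boils down to a handful of iproute2/iptables commands: the first E810 port (cvl_0_0) is moved into a private network namespace to play the target, its peer port (cvl_0_1) stays in the root namespace as the initiator, and the firewall rule is comment-tagged SPDK_NVMF so teardown can find it later. A condensed, runnable sketch using exactly the names and addresses reported above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'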
12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:21:53.657 00:21:53.657 --- 10.0.0.2 ping statistics --- 00:21:53.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.657 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:53.657 00:21:53.657 --- 10.0.0.1 ping statistics --- 00:21:53.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.657 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=668475 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 668475 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 668475 ']' 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:53.657 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.657 [2024-10-30 12:33:26.288444] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
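With addressing in place, the harness confirms reachability in both directions with single-packet pings (root namespace to 10.0.0.2, then from inside the namespace back to 10.0.0.1) and prefixes NVMF_APP with the netns wrapper, so the target process itself runs inside cvl_0_0_ns_spdk. A minimal sketch of the launch; the rpc.py poll is an assumed stand-in for the harness's waitforlisten helper, not the script's own code:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Assumed readiness probe: the RPC unix socket lives on the filesystem,
    # so it is reachable from the root namespace despite the netns split.
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null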
00:21:53.657 [2024-10-30 12:33:26.288534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.915 [2024-10-30 12:33:26.363073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.915 [2024-10-30 12:33:26.421162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.915 [2024-10-30 12:33:26.421221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.915 [2024-10-30 12:33:26.421249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.915 [2024-10-30 12:33:26.421267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.915 [2024-10-30 12:33:26.421277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.915 [2024-10-30 12:33:26.422809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.915 [2024-10-30 12:33:26.422875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.915 [2024-10-30 12:33:26.422942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.915 [2024-10-30 12:33:26.422945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.915 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.916 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.916 [2024-10-30 12:33:26.565800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.916 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.916 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:53.916 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.916 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 Malloc0 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 [2024-10-30 12:33:26.638686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 [ 00:21:54.174 { 00:21:54.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.174 "subtype": "Discovery", 00:21:54.174 "listen_addresses": [], 00:21:54.174 "allow_any_host": true, 00:21:54.174 "hosts": [] 00:21:54.174 }, 00:21:54.174 { 00:21:54.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.174 "subtype": "NVMe", 00:21:54.174 "listen_addresses": [ 00:21:54.174 { 00:21:54.174 "trtype": "TCP", 00:21:54.174 "adrfam": "IPv4", 00:21:54.174 "traddr": "10.0.0.2", 00:21:54.174 "trsvcid": "4420" 00:21:54.174 } 00:21:54.174 ], 00:21:54.174 "allow_any_host": true, 00:21:54.174 "hosts": [], 00:21:54.174 "serial_number": "SPDK00000000000001", 00:21:54.174 "model_number": "SPDK bdev Controller", 00:21:54.174 "max_namespaces": 2, 00:21:54.174 "min_cntlid": 1, 00:21:54.174 "max_cntlid": 65519, 00:21:54.174 "namespaces": [ 00:21:54.174 { 00:21:54.174 "nsid": 1, 00:21:54.174 "bdev_name": "Malloc0", 00:21:54.174 "name": "Malloc0", 00:21:54.174 "nguid": "401368A8C43245ADB9016531751602BA", 00:21:54.174 "uuid": "401368a8-c432-45ad-b901-6531751602ba" 00:21:54.174 } 00:21:54.174 ] 00:21:54.174 } 00:21:54.174 ] 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=668620 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:54.174 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:54.432 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.432 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:21:54.432 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:21:54.432 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:54.432 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.433 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:54.433 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:54.433 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:54.433 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 12:33:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 Malloc1 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 [ 00:21:54.433 { 00:21:54.433 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:54.433 "subtype": "Discovery", 00:21:54.433 "listen_addresses": [], 00:21:54.433 "allow_any_host": true, 00:21:54.433 "hosts": [] 00:21:54.433 }, 00:21:54.433 { 00:21:54.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.433 "subtype": "NVMe", 00:21:54.433 "listen_addresses": [ 00:21:54.433 { 00:21:54.433 "trtype": "TCP", 00:21:54.433 "adrfam": "IPv4", 00:21:54.433 "traddr": "10.0.0.2", 00:21:54.433 "trsvcid": "4420" 00:21:54.433 } 00:21:54.433 ], 00:21:54.433 "allow_any_host": true, 00:21:54.433 "hosts": [], 00:21:54.433 "serial_number": "SPDK00000000000001", 00:21:54.433 "model_number": "SPDK bdev Controller", 00:21:54.433 "max_namespaces": 2, 00:21:54.433 "min_cntlid": 1, 00:21:54.433 "max_cntlid": 65519, 00:21:54.433 "namespaces": [ 00:21:54.433 
{ 00:21:54.433 "nsid": 1, 00:21:54.433 "bdev_name": "Malloc0", 00:21:54.433 "name": "Malloc0", 00:21:54.433 "nguid": "401368A8C43245ADB9016531751602BA", 00:21:54.433 "uuid": "401368a8-c432-45ad-b901-6531751602ba" 00:21:54.433 }, 00:21:54.433 { 00:21:54.433 "nsid": 2, 00:21:54.433 "bdev_name": "Malloc1", 00:21:54.433 "name": "Malloc1", 00:21:54.433 "nguid": "3BBC40D16F494915A18B95647BD064B3", 00:21:54.433 "uuid": "3bbc40d1-6f49-4915-a18b-95647bd064b3" 00:21:54.433 } 00:21:54.433 ] 00:21:54.433 } 00:21:54.433 ] 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 668620 00:21:54.433 Asynchronous Event Request test 00:21:54.433 Attaching to 10.0.0.2 00:21:54.433 Attached to 10.0.0.2 00:21:54.433 Registering asynchronous event callbacks... 00:21:54.433 Starting namespace attribute notice tests for all controllers... 00:21:54.433 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:54.433 aer_cb - Changed Namespace 00:21:54.433 Cleaning up... 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.691 rmmod nvme_tcp 00:21:54.691 rmmod nvme_fabrics 00:21:54.691 rmmod nvme_keyring 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 668475 ']' 00:21:54.691 
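The AER exercise that just completed is a touch-file handshake: the aer tool attaches to cnode1 expecting up to two namespaces, touches /tmp/aer_touch_file once its event callbacks are registered, the script hot-adds Malloc1 as nsid 2, and the resulting Namespace Attribute Changed notice (the "aer_cb for log page 4 ... Changed Namespace" lines above) lets the tool finish its checks and exit. Distilled from the trace, with waitforfile reduced to its polling loop:

    rm -f /tmp/aer_touch_file
    ./test/nvme/aer/aer -n 2 -t /tmp/aer_touch_file \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # waitforfile, simplified
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2    # fires the AEN
    wait "$aerpid"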
12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 668475 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 668475 ']' 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 668475 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 668475 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 668475' 00:21:54.691 killing process with pid 668475 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 668475 00:21:54.691 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 668475 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.949 12:33:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:56.855 00:21:56.855 real 0m5.691s 00:21:56.855 user 0m4.850s 00:21:56.855 sys 0m2.051s 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.855 ************************************ 00:21:56.855 END TEST nvmf_aer 00:21:56.855 ************************************ 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:56.855 12:33:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.119 ************************************ 00:21:57.119 START TEST nvmf_async_init 00:21:57.119 
************************************ 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:57.119 * Looking for test storage... 00:21:57.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.119 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.120 --rc genhtml_branch_coverage=1 00:21:57.120 --rc genhtml_function_coverage=1 00:21:57.120 --rc genhtml_legend=1 00:21:57.120 --rc geninfo_all_blocks=1 00:21:57.120 --rc geninfo_unexecuted_blocks=1 00:21:57.120 00:21:57.120 ' 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.120 --rc genhtml_branch_coverage=1 00:21:57.120 --rc genhtml_function_coverage=1 00:21:57.120 --rc genhtml_legend=1 00:21:57.120 --rc geninfo_all_blocks=1 00:21:57.120 --rc geninfo_unexecuted_blocks=1 00:21:57.120 00:21:57.120 ' 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.120 --rc genhtml_branch_coverage=1 00:21:57.120 --rc genhtml_function_coverage=1 00:21:57.120 --rc genhtml_legend=1 00:21:57.120 --rc geninfo_all_blocks=1 00:21:57.120 --rc geninfo_unexecuted_blocks=1 00:21:57.120 00:21:57.120 ' 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.120 --rc genhtml_branch_coverage=1 00:21:57.120 --rc genhtml_function_coverage=1 00:21:57.120 --rc genhtml_legend=1 00:21:57.120 --rc geninfo_all_blocks=1 00:21:57.120 --rc geninfo_unexecuted_blocks=1 00:21:57.120 00:21:57.120 ' 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.120 12:33:29 
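The comparison just traced is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x, which is why the legacy '--rc lcov_branch_coverage=1 ...' option set is exported next. cmp_versions walks the dot-separated fields numerically; the same predicate can be sketched with GNU sort -V, which is not how the script does it but is a compact way to check the logic:

    lt() {    # true when $1 sorts strictly before $2 as a version string
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov < 2: use legacy LCOV_OPTS"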
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.120 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:57.121 12:33:29 
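Two recurring artifacts in this trace are worth a note rather than alarm. The enormous PATH values come from paths/export.sh prepending the same toolchain directories on every sourcing, so the variable grows each time it is exported; a membership guard would keep it flat (a sketch, not SPDK's code):

    path_prepend() {    # prepend $1 only if it is not already a PATH member
        case ":$PATH:" in *":$1:"*) ;; *) PATH="$1:$PATH" ;; esac
    }
    path_prepend /opt/protoc/21.7/bin

The repeated "line 33: [: : integer expression expected" message, seen here and in the earlier nvmf_aer run, is the traced test expanding to [ '' -eq 1 ] because the checked variable is unset and -eq requires an integer operand. A defaulted expansion would silence it (SOME_FLAG stands in for the real variable, which the trace elides):

    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :    # hypothetical branch body; the real one adjusts nvmf_tgt arguments
    fi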
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1b5a7c9d1820453da730869d3dcef9e1 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.121 12:33:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:59.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:59.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:59.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:59.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.656 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.656 12:33:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:21:59.657 00:21:59.657 --- 10.0.0.2 ping statistics --- 00:21:59.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.657 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:21:59.657 00:21:59.657 --- 10.0.0.1 ping statistics --- 00:21:59.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.657 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=670566 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 670566 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 670566 ']' 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.657 12:33:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 [2024-10-30 12:33:32.010864] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:21:59.657 [2024-10-30 12:33:32.010955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.657 [2024-10-30 12:33:32.083161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.657 [2024-10-30 12:33:32.138222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.657 [2024-10-30 12:33:32.138293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.657 [2024-10-30 12:33:32.138322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.657 [2024-10-30 12:33:32.138334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.657 [2024-10-30 12:33:32.138344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.657 [2024-10-30 12:33:32.138925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 [2024-10-30 12:33:32.276832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 null0 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1b5a7c9d1820453da730869d3dcef9e1 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.657 [2024-10-30 12:33:32.317066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.657 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.916 nvme0n1 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.916 [ 00:21:59.916 { 00:21:59.916 "name": "nvme0n1", 00:21:59.916 "aliases": [ 00:21:59.916 "1b5a7c9d-1820-453d-a730-869d3dcef9e1" 00:21:59.916 ], 00:21:59.916 "product_name": "NVMe disk", 00:21:59.916 "block_size": 512, 00:21:59.916 "num_blocks": 2097152, 00:21:59.916 "uuid": "1b5a7c9d-1820-453d-a730-869d3dcef9e1", 00:21:59.916 "numa_id": 0, 00:21:59.916 "assigned_rate_limits": { 00:21:59.916 "rw_ios_per_sec": 0, 00:21:59.916 "rw_mbytes_per_sec": 0, 00:21:59.916 "r_mbytes_per_sec": 0, 00:21:59.916 "w_mbytes_per_sec": 0 00:21:59.916 }, 00:21:59.916 "claimed": false, 00:21:59.916 "zoned": false, 00:21:59.916 "supported_io_types": { 00:21:59.916 "read": true, 00:21:59.916 "write": true, 00:21:59.916 "unmap": false, 00:21:59.916 "flush": true, 00:21:59.916 "reset": true, 00:21:59.916 "nvme_admin": true, 00:21:59.916 "nvme_io": true, 00:21:59.916 "nvme_io_md": false, 00:21:59.916 "write_zeroes": true, 00:21:59.916 "zcopy": false, 00:21:59.916 "get_zone_info": false, 00:21:59.916 "zone_management": false, 00:21:59.916 "zone_append": false, 00:21:59.916 "compare": true, 00:21:59.916 "compare_and_write": true, 00:21:59.916 "abort": true, 00:21:59.916 "seek_hole": false, 00:21:59.916 "seek_data": false, 00:21:59.916 "copy": true, 00:21:59.916 "nvme_iov_md": false 00:21:59.916 }, 00:21:59.916 
"memory_domains": [ 00:21:59.916 { 00:21:59.916 "dma_device_id": "system", 00:21:59.916 "dma_device_type": 1 00:21:59.916 } 00:21:59.916 ], 00:21:59.916 "driver_specific": { 00:21:59.916 "nvme": [ 00:21:59.916 { 00:21:59.916 "trid": { 00:21:59.916 "trtype": "TCP", 00:21:59.916 "adrfam": "IPv4", 00:21:59.916 "traddr": "10.0.0.2", 00:21:59.916 "trsvcid": "4420", 00:21:59.916 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.916 }, 00:21:59.916 "ctrlr_data": { 00:21:59.916 "cntlid": 1, 00:21:59.916 "vendor_id": "0x8086", 00:21:59.916 "model_number": "SPDK bdev Controller", 00:21:59.916 "serial_number": "00000000000000000000", 00:21:59.916 "firmware_revision": "25.01", 00:21:59.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.916 "oacs": { 00:21:59.916 "security": 0, 00:21:59.916 "format": 0, 00:21:59.916 "firmware": 0, 00:21:59.916 "ns_manage": 0 00:21:59.916 }, 00:21:59.916 "multi_ctrlr": true, 00:21:59.916 "ana_reporting": false 00:21:59.916 }, 00:21:59.916 "vs": { 00:21:59.916 "nvme_version": "1.3" 00:21:59.916 }, 00:21:59.916 "ns_data": { 00:21:59.916 "id": 1, 00:21:59.916 "can_share": true 00:21:59.916 } 00:21:59.916 } 00:21:59.916 ], 00:21:59.916 "mp_policy": "active_passive" 00:21:59.916 } 00:21:59.916 } 00:21:59.916 ] 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.916 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.916 [2024-10-30 12:33:32.566140] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:59.916 [2024-10-30 12:33:32.566269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa27b20 (9): Bad file descriptor 00:22:00.176 [2024-10-30 12:33:32.698387] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 [ 00:22:00.176 { 00:22:00.176 "name": "nvme0n1", 00:22:00.176 "aliases": [ 00:22:00.176 "1b5a7c9d-1820-453d-a730-869d3dcef9e1" 00:22:00.176 ], 00:22:00.176 "product_name": "NVMe disk", 00:22:00.176 "block_size": 512, 00:22:00.176 "num_blocks": 2097152, 00:22:00.176 "uuid": "1b5a7c9d-1820-453d-a730-869d3dcef9e1", 00:22:00.176 "numa_id": 0, 00:22:00.176 "assigned_rate_limits": { 00:22:00.176 "rw_ios_per_sec": 0, 00:22:00.176 "rw_mbytes_per_sec": 0, 00:22:00.176 "r_mbytes_per_sec": 0, 00:22:00.176 "w_mbytes_per_sec": 0 00:22:00.176 }, 00:22:00.176 "claimed": false, 00:22:00.176 "zoned": false, 00:22:00.176 "supported_io_types": { 00:22:00.176 "read": true, 00:22:00.176 "write": true, 00:22:00.176 "unmap": false, 00:22:00.176 "flush": true, 00:22:00.176 "reset": true, 00:22:00.176 "nvme_admin": true, 00:22:00.176 "nvme_io": true, 00:22:00.176 "nvme_io_md": false, 00:22:00.176 "write_zeroes": true, 00:22:00.176 "zcopy": false, 00:22:00.176 "get_zone_info": false, 00:22:00.176 "zone_management": false, 00:22:00.176 "zone_append": false, 00:22:00.176 "compare": true, 00:22:00.176 "compare_and_write": true, 00:22:00.176 "abort": true, 00:22:00.176 "seek_hole": false, 00:22:00.176 "seek_data": false, 00:22:00.176 "copy": true, 00:22:00.176 "nvme_iov_md": false 00:22:00.176 }, 00:22:00.176 "memory_domains": [ 00:22:00.176 { 00:22:00.176 "dma_device_id": "system", 00:22:00.176 "dma_device_type": 1 00:22:00.176 } 00:22:00.176 ], 00:22:00.176 "driver_specific": { 00:22:00.176 "nvme": [ 00:22:00.176 { 00:22:00.176 "trid": { 00:22:00.176 "trtype": "TCP", 00:22:00.176 "adrfam": "IPv4", 00:22:00.176 "traddr": "10.0.0.2", 00:22:00.176 "trsvcid": "4420", 00:22:00.176 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:00.176 }, 00:22:00.176 "ctrlr_data": { 00:22:00.176 "cntlid": 2, 00:22:00.176 "vendor_id": "0x8086", 00:22:00.176 "model_number": "SPDK bdev Controller", 00:22:00.176 "serial_number": "00000000000000000000", 00:22:00.176 "firmware_revision": "25.01", 00:22:00.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.176 "oacs": { 00:22:00.176 "security": 0, 00:22:00.176 "format": 0, 00:22:00.176 "firmware": 0, 00:22:00.176 "ns_manage": 0 00:22:00.176 }, 00:22:00.176 "multi_ctrlr": true, 00:22:00.176 "ana_reporting": false 00:22:00.176 }, 00:22:00.176 "vs": { 00:22:00.176 "nvme_version": "1.3" 00:22:00.176 }, 00:22:00.176 "ns_data": { 00:22:00.176 "id": 1, 00:22:00.176 "can_share": true 00:22:00.176 } 00:22:00.176 } 00:22:00.176 ], 00:22:00.176 "mp_policy": "active_passive" 00:22:00.176 } 00:22:00.176 } 00:22:00.176 ] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
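The reset test between the two bdev dumps is visible only through ctrlr_data: bdev_nvme_reset_controller drops the TCP connection (the "Bad file descriptor" flush on qpair 0xa27b20), reconnects, and the new admin connection takes cntlid 2 while name, UUID, and size stay identical. A sketch of asserting that behavior, assuming jq is available (the harness itself does not use jq):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    cntlid() { $rpc bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'; }

    before=$(cntlid)
    $rpc bdev_nvme_reset_controller nvme0
    after=$(cntlid)
    # A successful reset re-attaches as a fresh controller on the target.
    [ "$after" -gt "$before" ] && echo "reset ok: cntlid $before -> $after"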
00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Kh257Nap6g 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Kh257Nap6g 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Kh257Nap6g 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 [2024-10-30 12:33:32.750732] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.176 [2024-10-30 12:33:32.750851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 [2024-10-30 12:33:32.766776] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.176 nvme0n1 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.176 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.176 [ 00:22:00.176 { 00:22:00.176 "name": "nvme0n1", 00:22:00.177 "aliases": [ 00:22:00.177 "1b5a7c9d-1820-453d-a730-869d3dcef9e1" 00:22:00.177 ], 00:22:00.177 "product_name": "NVMe disk", 00:22:00.177 "block_size": 512, 00:22:00.177 "num_blocks": 2097152, 00:22:00.177 "uuid": "1b5a7c9d-1820-453d-a730-869d3dcef9e1", 00:22:00.177 "numa_id": 0, 00:22:00.177 "assigned_rate_limits": { 00:22:00.177 "rw_ios_per_sec": 0, 00:22:00.177 "rw_mbytes_per_sec": 0, 00:22:00.177 "r_mbytes_per_sec": 0, 00:22:00.177 "w_mbytes_per_sec": 0 00:22:00.177 }, 00:22:00.177 "claimed": false, 00:22:00.177 "zoned": false, 00:22:00.177 "supported_io_types": { 00:22:00.177 "read": true, 00:22:00.177 "write": true, 00:22:00.177 "unmap": false, 00:22:00.177 "flush": true, 00:22:00.177 "reset": true, 00:22:00.177 "nvme_admin": true, 00:22:00.177 "nvme_io": true, 00:22:00.177 "nvme_io_md": false, 00:22:00.177 "write_zeroes": true, 00:22:00.177 "zcopy": false, 00:22:00.177 "get_zone_info": false, 00:22:00.177 "zone_management": false, 00:22:00.177 "zone_append": false, 00:22:00.177 "compare": true, 00:22:00.177 "compare_and_write": true, 00:22:00.177 "abort": true, 00:22:00.177 "seek_hole": false, 00:22:00.177 "seek_data": false, 00:22:00.177 "copy": true, 00:22:00.177 "nvme_iov_md": false 00:22:00.177 }, 00:22:00.177 "memory_domains": [ 00:22:00.177 { 00:22:00.177 "dma_device_id": "system", 00:22:00.177 "dma_device_type": 1 00:22:00.177 } 00:22:00.177 ], 00:22:00.177 "driver_specific": { 00:22:00.177 "nvme": [ 00:22:00.177 { 00:22:00.177 "trid": { 00:22:00.177 "trtype": "TCP", 00:22:00.177 "adrfam": "IPv4", 00:22:00.177 "traddr": "10.0.0.2", 00:22:00.177 "trsvcid": "4421", 00:22:00.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:00.177 }, 00:22:00.177 "ctrlr_data": { 00:22:00.177 "cntlid": 3, 00:22:00.177 "vendor_id": "0x8086", 00:22:00.177 "model_number": "SPDK bdev Controller", 00:22:00.177 "serial_number": "00000000000000000000", 00:22:00.177 "firmware_revision": "25.01", 00:22:00.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:00.177 "oacs": { 00:22:00.177 "security": 0, 00:22:00.177 "format": 0, 00:22:00.177 "firmware": 0, 00:22:00.177 "ns_manage": 0 00:22:00.177 }, 00:22:00.177 "multi_ctrlr": true, 00:22:00.177 "ana_reporting": false 00:22:00.177 }, 00:22:00.177 "vs": { 00:22:00.177 "nvme_version": "1.3" 00:22:00.177 }, 00:22:00.177 "ns_data": { 00:22:00.177 "id": 1, 00:22:00.177 "can_share": true 00:22:00.177 } 00:22:00.177 } 00:22:00.177 ], 00:22:00.177 "mp_policy": "active_passive" 00:22:00.177 } 00:22:00.177 } 00:22:00.177 ] 00:22:00.177 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.177 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:00.177 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.177 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Kh257Nap6g 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
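The tail of the test switches to the experimental TLS path: a well-known interchange-format PSK is written to a temp file with restricted permissions, registered in the keyring as key0, any-host access is revoked, and a --secure-channel listener on 4421 only admits host1 presenting that key (both tcp.c and bdev_nvme_rpc.c log that TLS support is considered experimental). The same flow condensed, with the commands and the test PSK copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode0

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                    # restrict the key file as the test does

    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_allow_any_host $nqn --disable
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host $nqn nqn.2016-06.io.spdk:host1 --psk key0
    # The host must present the same hostnqn and key to complete the handshake.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n $nqn -q nqn.2016-06.io.spdk:host1 --psk key0
    rm -f "$key_path"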
00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:00.436 rmmod nvme_tcp 00:22:00.436 rmmod nvme_fabrics 00:22:00.436 rmmod nvme_keyring 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 670566 ']' 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 670566 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 670566 ']' 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 670566 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 670566 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 670566' 00:22:00.436 killing process with pid 670566 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 670566 00:22:00.436 12:33:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 670566 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.696 
12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.696 12:33:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.603 00:22:02.603 real 0m5.618s 00:22:02.603 user 0m2.131s 00:22:02.603 sys 0m1.909s 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.603 ************************************ 00:22:02.603 END TEST nvmf_async_init 00:22:02.603 ************************************ 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.603 ************************************ 00:22:02.603 START TEST dma 00:22:02.603 ************************************ 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:02.603 * Looking for test storage... 00:22:02.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:02.603 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.862 --rc genhtml_branch_coverage=1 00:22:02.862 --rc genhtml_function_coverage=1 00:22:02.862 --rc genhtml_legend=1 00:22:02.862 --rc geninfo_all_blocks=1 00:22:02.862 --rc geninfo_unexecuted_blocks=1 00:22:02.862 00:22:02.862 ' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.862 --rc genhtml_branch_coverage=1 00:22:02.862 --rc genhtml_function_coverage=1 00:22:02.862 --rc genhtml_legend=1 00:22:02.862 --rc geninfo_all_blocks=1 00:22:02.862 --rc geninfo_unexecuted_blocks=1 00:22:02.862 00:22:02.862 ' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.862 --rc genhtml_branch_coverage=1 00:22:02.862 --rc genhtml_function_coverage=1 00:22:02.862 --rc genhtml_legend=1 00:22:02.862 --rc geninfo_all_blocks=1 00:22:02.862 --rc geninfo_unexecuted_blocks=1 00:22:02.862 00:22:02.862 ' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:02.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.862 --rc genhtml_branch_coverage=1 00:22:02.862 --rc genhtml_function_coverage=1 00:22:02.862 --rc genhtml_legend=1 00:22:02.862 --rc geninfo_all_blocks=1 00:22:02.862 --rc geninfo_unexecuted_blocks=1 00:22:02.862 00:22:02.862 ' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.862 
12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.862 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:02.863 00:22:02.863 real 0m0.170s 00:22:02.863 user 0m0.112s 00:22:02.863 sys 0m0.067s 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:02.863 ************************************ 00:22:02.863 END TEST dma 00:22:02.863 ************************************ 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.863 ************************************ 00:22:02.863 START TEST nvmf_identify 00:22:02.863 
************************************ 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.863 * Looking for test storage... 00:22:02.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:02.863 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.123 --rc genhtml_branch_coverage=1 00:22:03.123 --rc genhtml_function_coverage=1 00:22:03.123 --rc genhtml_legend=1 00:22:03.123 --rc geninfo_all_blocks=1 00:22:03.123 --rc geninfo_unexecuted_blocks=1 00:22:03.123 00:22:03.123 ' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.123 --rc genhtml_branch_coverage=1 00:22:03.123 --rc genhtml_function_coverage=1 00:22:03.123 --rc genhtml_legend=1 00:22:03.123 --rc geninfo_all_blocks=1 00:22:03.123 --rc geninfo_unexecuted_blocks=1 00:22:03.123 00:22:03.123 ' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.123 --rc genhtml_branch_coverage=1 00:22:03.123 --rc genhtml_function_coverage=1 00:22:03.123 --rc genhtml_legend=1 00:22:03.123 --rc geninfo_all_blocks=1 00:22:03.123 --rc geninfo_unexecuted_blocks=1 00:22:03.123 00:22:03.123 ' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.123 --rc genhtml_branch_coverage=1 00:22:03.123 --rc genhtml_function_coverage=1 00:22:03.123 --rc genhtml_legend=1 00:22:03.123 --rc geninfo_all_blocks=1 00:22:03.123 --rc geninfo_unexecuted_blocks=1 00:22:03.123 00:22:03.123 ' 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.123 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:03.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.124 12:33:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.669 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:05.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:05.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:05.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:05.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.670 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:22:05.671 00:22:05.671 --- 10.0.0.2 ping statistics --- 00:22:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.671 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:22:05.671 00:22:05.671 --- 10.0.0.1 ping statistics --- 00:22:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.671 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=672714 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 672714 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 672714 ']' 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.671 12:33:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 [2024-10-30 12:33:38.002727] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
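For reference, the loopback topology nvmf_tcp_init assembled above (before the target launch) isolates the target port cvl_0_0 in its own namespace at 10.0.0.2 while the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, so the NVMe/TCP traffic crosses a real E810 link. A sketch of the same setup with plain iproute2/iptables, assuming this run's interface names (the harness's ipts wrapper additionally tags its rule with an SPDK_NVMF comment, omitted here):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator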
00:22:05.671 [2024-10-30 12:33:38.002797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.671 [2024-10-30 12:33:38.076206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.671 [2024-10-30 12:33:38.136515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.671 [2024-10-30 12:33:38.136581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.671 [2024-10-30 12:33:38.136594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.671 [2024-10-30 12:33:38.136605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.671 [2024-10-30 12:33:38.136615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.671 [2024-10-30 12:33:38.138187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.671 [2024-10-30 12:33:38.138284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.671 [2024-10-30 12:33:38.138327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.671 [2024-10-30 12:33:38.138331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 [2024-10-30 12:33:38.262708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 Malloc0 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:05.671 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.672 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.672 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.672 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.672 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.672 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.935 [2024-10-30 12:33:38.353448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.935 [ 00:22:05.935 { 00:22:05.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:05.935 "subtype": "Discovery", 00:22:05.935 "listen_addresses": [ 00:22:05.935 { 00:22:05.935 "trtype": "TCP", 00:22:05.935 "adrfam": "IPv4", 00:22:05.935 "traddr": "10.0.0.2", 00:22:05.935 "trsvcid": "4420" 00:22:05.935 } 00:22:05.935 ], 00:22:05.935 "allow_any_host": true, 00:22:05.935 "hosts": [] 00:22:05.935 }, 00:22:05.935 { 00:22:05.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.935 "subtype": "NVMe", 00:22:05.935 "listen_addresses": [ 00:22:05.935 { 00:22:05.935 "trtype": "TCP", 00:22:05.935 "adrfam": "IPv4", 00:22:05.935 "traddr": "10.0.0.2", 00:22:05.935 "trsvcid": "4420" 00:22:05.935 } 00:22:05.935 ], 00:22:05.935 "allow_any_host": true, 00:22:05.935 "hosts": [], 00:22:05.935 "serial_number": "SPDK00000000000001", 00:22:05.935 "model_number": "SPDK bdev Controller", 00:22:05.935 "max_namespaces": 32, 00:22:05.935 "min_cntlid": 1, 00:22:05.935 "max_cntlid": 65519, 00:22:05.935 "namespaces": [ 00:22:05.935 { 00:22:05.935 "nsid": 1, 00:22:05.935 "bdev_name": "Malloc0", 00:22:05.935 "name": "Malloc0", 00:22:05.935 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:05.935 "eui64": "ABCDEF0123456789", 00:22:05.935 "uuid": "7fd7cf17-0af0-43fe-a298-63e793406842" 00:22:05.935 } 00:22:05.935 ] 00:22:05.935 } 00:22:05.935 ] 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.935 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:05.935 [2024-10-30 12:33:38.398103] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:22:05.935 [2024-10-30 12:33:38.398156] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672856 ] 00:22:05.935 [2024-10-30 12:33:38.460652] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:05.935 [2024-10-30 12:33:38.460720] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:05.935 [2024-10-30 12:33:38.460730] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:05.935 [2024-10-30 12:33:38.460748] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:05.935 [2024-10-30 12:33:38.460762] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:05.935 [2024-10-30 12:33:38.461414] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:05.935 [2024-10-30 12:33:38.461468] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fff690 0 00:22:05.935 [2024-10-30 12:33:38.467269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:05.935 [2024-10-30 12:33:38.467306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:05.935 [2024-10-30 12:33:38.467315] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:05.935 [2024-10-30 12:33:38.467322] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:05.935 [2024-10-30 12:33:38.467365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.467378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.467385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.935 [2024-10-30 12:33:38.467403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:05.935 [2024-10-30 12:33:38.467430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.935 [2024-10-30 12:33:38.475274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.935 [2024-10-30 12:33:38.475292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.935 [2024-10-30 12:33:38.475300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.935 [2024-10-30 12:33:38.475332] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:05.935 [2024-10-30 12:33:38.475346] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:05.935 [2024-10-30 12:33:38.475355] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:05.935 [2024-10-30 12:33:38.475376] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.935 [2024-10-30 12:33:38.475403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.935 [2024-10-30 12:33:38.475426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.935 [2024-10-30 12:33:38.475524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.935 [2024-10-30 12:33:38.475538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.935 [2024-10-30 12:33:38.475545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.935 [2024-10-30 12:33:38.475561] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:05.935 [2024-10-30 12:33:38.475573] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:05.935 [2024-10-30 12:33:38.475585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.935 [2024-10-30 12:33:38.475609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.935 [2024-10-30 12:33:38.475630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.935 [2024-10-30 12:33:38.475708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.935 [2024-10-30 12:33:38.475719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.935 [2024-10-30 12:33:38.475726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.935 [2024-10-30 12:33:38.475741] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:05.935 [2024-10-30 12:33:38.475754] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:05.935 [2024-10-30 12:33:38.475766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.935 [2024-10-30 12:33:38.475789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.935 [2024-10-30 12:33:38.475810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 
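The rpc_cmd calls that configured the target above map one-to-one onto scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket; a sketch with the arguments taken from this run (the rpc.py path assumes the spdk checkout as working directory):

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems   # should return the JSON shown above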
00:22:05.935 [2024-10-30 12:33:38.475882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.935 [2024-10-30 12:33:38.475895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.935 [2024-10-30 12:33:38.475902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.935 [2024-10-30 12:33:38.475921] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:05.935 [2024-10-30 12:33:38.475943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.935 [2024-10-30 12:33:38.475959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.935 [2024-10-30 12:33:38.475970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.935 [2024-10-30 12:33:38.475991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.936 [2024-10-30 12:33:38.476066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.936 [2024-10-30 12:33:38.476080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.936 [2024-10-30 12:33:38.476086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.936 [2024-10-30 12:33:38.476101] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:05.936 [2024-10-30 12:33:38.476109] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:05.936 [2024-10-30 12:33:38.476122] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:05.936 [2024-10-30 12:33:38.476232] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:05.936 [2024-10-30 12:33:38.476240] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:05.936 [2024-10-30 12:33:38.476253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.476287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.936 [2024-10-30 12:33:38.476308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.936 [2024-10-30 12:33:38.476426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.936 [2024-10-30 12:33:38.476440] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.936 [2024-10-30 12:33:38.476447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.936 [2024-10-30 12:33:38.476461] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:05.936 [2024-10-30 12:33:38.476477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.476502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.936 [2024-10-30 12:33:38.476522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.936 [2024-10-30 12:33:38.476593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.936 [2024-10-30 12:33:38.476605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.936 [2024-10-30 12:33:38.476617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.936 [2024-10-30 12:33:38.476633] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:05.936 [2024-10-30 12:33:38.476641] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:05.936 [2024-10-30 12:33:38.476654] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:05.936 [2024-10-30 12:33:38.476672] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:05.936 [2024-10-30 12:33:38.476688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.476706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.936 [2024-10-30 12:33:38.476727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.936 [2024-10-30 12:33:38.476851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.936 [2024-10-30 12:33:38.476866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.936 [2024-10-30 12:33:38.476872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476878] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fff690): datao=0, datal=4096, cccid=0 00:22:05.936 [2024-10-30 12:33:38.476886] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x2061100) on tqpair(0x1fff690): expected_datao=0, payload_size=4096 00:22:05.936 [2024-10-30 12:33:38.476893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476911] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.476920] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.936 [2024-10-30 12:33:38.517353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.936 [2024-10-30 12:33:38.517361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.936 [2024-10-30 12:33:38.517380] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:05.936 [2024-10-30 12:33:38.517389] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:05.936 [2024-10-30 12:33:38.517397] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:05.936 [2024-10-30 12:33:38.517406] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:05.936 [2024-10-30 12:33:38.517413] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:05.936 [2024-10-30 12:33:38.517421] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:05.936 [2024-10-30 12:33:38.517435] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:05.936 [2024-10-30 12:33:38.517448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.517478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:05.936 [2024-10-30 12:33:38.517502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.936 [2024-10-30 12:33:38.517595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.936 [2024-10-30 12:33:38.517607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.936 [2024-10-30 12:33:38.517614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.936 [2024-10-30 12:33:38.517637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fff690) 00:22:05.936 
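The discovery subsystem being walked through its init sequence here can also be queried from the initiator side with stock nvme-cli, given the nvme-tcp module loaded earlier; a minimal equivalent of the discovery-log fetch (assumes nvme-cli is installed):

nvme discover -t tcp -a 10.0.0.2 -s 4420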
[2024-10-30 12:33:38.517662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.936 [2024-10-30 12:33:38.517672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.517693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.936 [2024-10-30 12:33:38.517702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.517723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.936 [2024-10-30 12:33:38.517733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.517754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.936 [2024-10-30 12:33:38.517763] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:05.936 [2024-10-30 12:33:38.517777] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:05.936 [2024-10-30 12:33:38.517788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.517795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.517805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.936 [2024-10-30 12:33:38.517827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061100, cid 0, qid 0 00:22:05.936 [2024-10-30 12:33:38.517837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061280, cid 1, qid 0 00:22:05.936 [2024-10-30 12:33:38.517845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061400, cid 2, qid 0 00:22:05.936 [2024-10-30 12:33:38.517852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.936 [2024-10-30 12:33:38.517859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061700, cid 4, qid 0 00:22:05.936 [2024-10-30 12:33:38.518003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.936 [2024-10-30 12:33:38.518019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.936 [2024-10-30 12:33:38.518026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:05.936 [2024-10-30 12:33:38.518033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061700) on tqpair=0x1fff690 00:22:05.936 [2024-10-30 12:33:38.518046] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:05.936 [2024-10-30 12:33:38.518056] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:05.936 [2024-10-30 12:33:38.518073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.936 [2024-10-30 12:33:38.518082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fff690) 00:22:05.936 [2024-10-30 12:33:38.518093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.936 [2024-10-30 12:33:38.518113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061700, cid 4, qid 0 00:22:05.936 [2024-10-30 12:33:38.518199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.937 [2024-10-30 12:33:38.518213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.937 [2024-10-30 12:33:38.518219] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518225] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fff690): datao=0, datal=4096, cccid=4 00:22:05.937 [2024-10-30 12:33:38.518233] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2061700) on tqpair(0x1fff690): expected_datao=0, payload_size=4096 00:22:05.937 [2024-10-30 12:33:38.518240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518263] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518274] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.937 [2024-10-30 12:33:38.518295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.937 [2024-10-30 12:33:38.518301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061700) on tqpair=0x1fff690 00:22:05.937 [2024-10-30 12:33:38.518325] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:05.937 [2024-10-30 12:33:38.518364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fff690) 00:22:05.937 [2024-10-30 12:33:38.518385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.937 [2024-10-30 12:33:38.518396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fff690) 00:22:05.937 [2024-10-30 12:33:38.518418] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.937 [2024-10-30 12:33:38.518444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061700, cid 4, qid 0 00:22:05.937 [2024-10-30 12:33:38.518456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061880, cid 5, qid 0 00:22:05.937 [2024-10-30 12:33:38.518599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.937 [2024-10-30 12:33:38.518613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.937 [2024-10-30 12:33:38.518619] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fff690): datao=0, datal=1024, cccid=4 00:22:05.937 [2024-10-30 12:33:38.518637] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2061700) on tqpair(0x1fff690): expected_datao=0, payload_size=1024 00:22:05.937 [2024-10-30 12:33:38.518645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518662] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.937 [2024-10-30 12:33:38.518679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.937 [2024-10-30 12:33:38.518685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.518691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061880) on tqpair=0x1fff690 00:22:05.937 [2024-10-30 12:33:38.561281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.937 [2024-10-30 12:33:38.561301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.937 [2024-10-30 12:33:38.561309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061700) on tqpair=0x1fff690 00:22:05.937 [2024-10-30 12:33:38.561334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fff690) 00:22:05.937 [2024-10-30 12:33:38.561355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.937 [2024-10-30 12:33:38.561386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061700, cid 4, qid 0 00:22:05.937 [2024-10-30 12:33:38.561498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.937 [2024-10-30 12:33:38.561513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.937 [2024-10-30 12:33:38.561520] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561526] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fff690): datao=0, datal=3072, cccid=4 00:22:05.937 [2024-10-30 12:33:38.561533] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2061700) on tqpair(0x1fff690): expected_datao=0, payload_size=3072 00:22:05.937 [2024-10-30 12:33:38.561540] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561551] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561558] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.937 [2024-10-30 12:33:38.561578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.937 [2024-10-30 12:33:38.561585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061700) on tqpair=0x1fff690 00:22:05.937 [2024-10-30 12:33:38.561606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fff690) 00:22:05.937 [2024-10-30 12:33:38.561624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.937 [2024-10-30 12:33:38.561652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061700, cid 4, qid 0 00:22:05.937 [2024-10-30 12:33:38.561744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:05.937 [2024-10-30 12:33:38.561755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:05.937 [2024-10-30 12:33:38.561762] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561768] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fff690): datao=0, datal=8, cccid=4 00:22:05.937 [2024-10-30 12:33:38.561780] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2061700) on tqpair(0x1fff690): expected_datao=0, payload_size=8 00:22:05.937 [2024-10-30 12:33:38.561788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561798] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.561805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.602336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.937 [2024-10-30 12:33:38.602355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.937 [2024-10-30 12:33:38.602362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.937 [2024-10-30 12:33:38.602369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061700) on tqpair=0x1fff690 00:22:05.937 ===================================================== 00:22:05.937 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:05.937 ===================================================== 00:22:05.937 Controller Capabilities/Features 00:22:05.937 ================================ 00:22:05.937 Vendor ID: 0000 00:22:05.937 Subsystem Vendor ID: 0000 00:22:05.937 Serial Number: .................... 00:22:05.937 Model Number: ........................................ 
00:22:05.937 Firmware Version: 25.01 00:22:05.937 Recommended Arb Burst: 0 00:22:05.937 IEEE OUI Identifier: 00 00 00 00:22:05.937 Multi-path I/O 00:22:05.937 May have multiple subsystem ports: No 00:22:05.937 May have multiple controllers: No 00:22:05.937 Associated with SR-IOV VF: No 00:22:05.937 Max Data Transfer Size: 131072 00:22:05.937 Max Number of Namespaces: 0 00:22:05.937 Max Number of I/O Queues: 1024 00:22:05.937 NVMe Specification Version (VS): 1.3 00:22:05.937 NVMe Specification Version (Identify): 1.3 00:22:05.937 Maximum Queue Entries: 128 00:22:05.937 Contiguous Queues Required: Yes 00:22:05.937 Arbitration Mechanisms Supported 00:22:05.937 Weighted Round Robin: Not Supported 00:22:05.937 Vendor Specific: Not Supported 00:22:05.937 Reset Timeout: 15000 ms 00:22:05.937 Doorbell Stride: 4 bytes 00:22:05.937 NVM Subsystem Reset: Not Supported 00:22:05.937 Command Sets Supported 00:22:05.937 NVM Command Set: Supported 00:22:05.937 Boot Partition: Not Supported 00:22:05.937 Memory Page Size Minimum: 4096 bytes 00:22:05.937 Memory Page Size Maximum: 4096 bytes 00:22:05.937 Persistent Memory Region: Not Supported 00:22:05.937 Optional Asynchronous Events Supported 00:22:05.937 Namespace Attribute Notices: Not Supported 00:22:05.937 Firmware Activation Notices: Not Supported 00:22:05.937 ANA Change Notices: Not Supported 00:22:05.937 PLE Aggregate Log Change Notices: Not Supported 00:22:05.937 LBA Status Info Alert Notices: Not Supported 00:22:05.937 EGE Aggregate Log Change Notices: Not Supported 00:22:05.937 Normal NVM Subsystem Shutdown event: Not Supported 00:22:05.937 Zone Descriptor Change Notices: Not Supported 00:22:05.937 Discovery Log Change Notices: Supported 00:22:05.937 Controller Attributes 00:22:05.937 128-bit Host Identifier: Not Supported 00:22:05.937 Non-Operational Permissive Mode: Not Supported 00:22:05.937 NVM Sets: Not Supported 00:22:05.937 Read Recovery Levels: Not Supported 00:22:05.937 Endurance Groups: Not Supported 00:22:05.937 Predictable Latency Mode: Not Supported 00:22:05.937 Traffic Based Keep ALive: Not Supported 00:22:05.937 Namespace Granularity: Not Supported 00:22:05.937 SQ Associations: Not Supported 00:22:05.937 UUID List: Not Supported 00:22:05.937 Multi-Domain Subsystem: Not Supported 00:22:05.937 Fixed Capacity Management: Not Supported 00:22:05.937 Variable Capacity Management: Not Supported 00:22:05.937 Delete Endurance Group: Not Supported 00:22:05.937 Delete NVM Set: Not Supported 00:22:05.937 Extended LBA Formats Supported: Not Supported 00:22:05.937 Flexible Data Placement Supported: Not Supported 00:22:05.937 00:22:05.937 Controller Memory Buffer Support 00:22:05.937 ================================ 00:22:05.937 Supported: No 00:22:05.937 00:22:05.937 Persistent Memory Region Support 00:22:05.937 ================================ 00:22:05.937 Supported: No 00:22:05.937 00:22:05.937 Admin Command Set Attributes 00:22:05.937 ============================ 00:22:05.937 Security Send/Receive: Not Supported 00:22:05.937 Format NVM: Not Supported 00:22:05.938 Firmware Activate/Download: Not Supported 00:22:05.938 Namespace Management: Not Supported 00:22:05.938 Device Self-Test: Not Supported 00:22:05.938 Directives: Not Supported 00:22:05.938 NVMe-MI: Not Supported 00:22:05.938 Virtualization Management: Not Supported 00:22:05.938 Doorbell Buffer Config: Not Supported 00:22:05.938 Get LBA Status Capability: Not Supported 00:22:05.938 Command & Feature Lockdown Capability: Not Supported 00:22:05.938 Abort Command Limit: 1 00:22:05.938 Async 
Event Request Limit: 4 00:22:05.938 Number of Firmware Slots: N/A 00:22:05.938 Firmware Slot 1 Read-Only: N/A 00:22:05.938 Firmware Activation Without Reset: N/A 00:22:05.938 Multiple Update Detection Support: N/A 00:22:05.938 Firmware Update Granularity: No Information Provided 00:22:05.938 Per-Namespace SMART Log: No 00:22:05.938 Asymmetric Namespace Access Log Page: Not Supported 00:22:05.938 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:05.938 Command Effects Log Page: Not Supported 00:22:05.938 Get Log Page Extended Data: Supported 00:22:05.938 Telemetry Log Pages: Not Supported 00:22:05.938 Persistent Event Log Pages: Not Supported 00:22:05.938 Supported Log Pages Log Page: May Support 00:22:05.938 Commands Supported & Effects Log Page: Not Supported 00:22:05.938 Feature Identifiers & Effects Log Page:May Support 00:22:05.938 NVMe-MI Commands & Effects Log Page: May Support 00:22:05.938 Data Area 4 for Telemetry Log: Not Supported 00:22:05.938 Error Log Page Entries Supported: 128 00:22:05.938 Keep Alive: Not Supported 00:22:05.938 00:22:05.938 NVM Command Set Attributes 00:22:05.938 ========================== 00:22:05.938 Submission Queue Entry Size 00:22:05.938 Max: 1 00:22:05.938 Min: 1 00:22:05.938 Completion Queue Entry Size 00:22:05.938 Max: 1 00:22:05.938 Min: 1 00:22:05.938 Number of Namespaces: 0 00:22:05.938 Compare Command: Not Supported 00:22:05.938 Write Uncorrectable Command: Not Supported 00:22:05.938 Dataset Management Command: Not Supported 00:22:05.938 Write Zeroes Command: Not Supported 00:22:05.938 Set Features Save Field: Not Supported 00:22:05.938 Reservations: Not Supported 00:22:05.938 Timestamp: Not Supported 00:22:05.938 Copy: Not Supported 00:22:05.938 Volatile Write Cache: Not Present 00:22:05.938 Atomic Write Unit (Normal): 1 00:22:05.938 Atomic Write Unit (PFail): 1 00:22:05.938 Atomic Compare & Write Unit: 1 00:22:05.938 Fused Compare & Write: Supported 00:22:05.938 Scatter-Gather List 00:22:05.938 SGL Command Set: Supported 00:22:05.938 SGL Keyed: Supported 00:22:05.938 SGL Bit Bucket Descriptor: Not Supported 00:22:05.938 SGL Metadata Pointer: Not Supported 00:22:05.938 Oversized SGL: Not Supported 00:22:05.938 SGL Metadata Address: Not Supported 00:22:05.938 SGL Offset: Supported 00:22:05.938 Transport SGL Data Block: Not Supported 00:22:05.938 Replay Protected Memory Block: Not Supported 00:22:05.938 00:22:05.938 Firmware Slot Information 00:22:05.938 ========================= 00:22:05.938 Active slot: 0 00:22:05.938 00:22:05.938 00:22:05.938 Error Log 00:22:05.938 ========= 00:22:05.938 00:22:05.938 Active Namespaces 00:22:05.938 ================= 00:22:05.938 Discovery Log Page 00:22:05.938 ================== 00:22:05.938 Generation Counter: 2 00:22:05.938 Number of Records: 2 00:22:05.938 Record Format: 0 00:22:05.938 00:22:05.938 Discovery Log Entry 0 00:22:05.938 ---------------------- 00:22:05.938 Transport Type: 3 (TCP) 00:22:05.938 Address Family: 1 (IPv4) 00:22:05.938 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:05.938 Entry Flags: 00:22:05.938 Duplicate Returned Information: 1 00:22:05.938 Explicit Persistent Connection Support for Discovery: 1 00:22:05.938 Transport Requirements: 00:22:05.938 Secure Channel: Not Required 00:22:05.938 Port ID: 0 (0x0000) 00:22:05.938 Controller ID: 65535 (0xffff) 00:22:05.938 Admin Max SQ Size: 128 00:22:05.938 Transport Service Identifier: 4420 00:22:05.938 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:05.938 Transport Address: 10.0.0.2 00:22:05.938 
Discovery Log Entry 1 00:22:05.938 ---------------------- 00:22:05.938 Transport Type: 3 (TCP) 00:22:05.938 Address Family: 1 (IPv4) 00:22:05.938 Subsystem Type: 2 (NVM Subsystem) 00:22:05.938 Entry Flags: 00:22:05.938 Duplicate Returned Information: 0 00:22:05.938 Explicit Persistent Connection Support for Discovery: 0 00:22:05.938 Transport Requirements: 00:22:05.938 Secure Channel: Not Required 00:22:05.938 Port ID: 0 (0x0000) 00:22:05.938 Controller ID: 65535 (0xffff) 00:22:05.938 Admin Max SQ Size: 128 00:22:05.938 Transport Service Identifier: 4420 00:22:05.938 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:05.938 Transport Address: 10.0.0.2 [2024-10-30 12:33:38.602496] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:05.938 [2024-10-30 12:33:38.602518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061100) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.602531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.938 [2024-10-30 12:33:38.602540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061280) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.602547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.938 [2024-10-30 12:33:38.602555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061400) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.602562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.938 [2024-10-30 12:33:38.602570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.602578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.938 [2024-10-30 12:33:38.602590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.602598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.602604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.938 [2024-10-30 12:33:38.602615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.938 [2024-10-30 12:33:38.602639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.938 [2024-10-30 12:33:38.602743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.938 [2024-10-30 12:33:38.602755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.938 [2024-10-30 12:33:38.602762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.602768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.602784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.602793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.602799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.938 [2024-10-30 
12:33:38.602810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.938 [2024-10-30 12:33:38.602836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.938 [2024-10-30 12:33:38.602927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.938 [2024-10-30 12:33:38.602940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.938 [2024-10-30 12:33:38.602947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.602954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.602967] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:05.938 [2024-10-30 12:33:38.602975] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:05.938 [2024-10-30 12:33:38.602991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.938 [2024-10-30 12:33:38.603016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.938 [2024-10-30 12:33:38.603037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.938 [2024-10-30 12:33:38.603109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.938 [2024-10-30 12:33:38.603121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.938 [2024-10-30 12:33:38.603127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.603149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.938 [2024-10-30 12:33:38.603175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.938 [2024-10-30 12:33:38.603195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.938 [2024-10-30 12:33:38.603270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.938 [2024-10-30 12:33:38.603284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.938 [2024-10-30 12:33:38.603291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.938 [2024-10-30 12:33:38.603313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.938 [2024-10-30 12:33:38.603322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603328] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.603338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.603359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.603436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.603449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.603456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.603478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.603503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.603523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.603596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.603609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.603620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.603642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.603668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.603688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.603758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.603771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.603778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.603800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.603825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.603845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.603918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.603931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.603937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.603959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.603974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.603985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.604073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.604085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.604092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.604113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.604138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.604226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.604238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.604244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.604281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.604307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 
[2024-10-30 12:33:38.604399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.604410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.604417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.604438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.604464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.604556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.604569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.604576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.604598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.604624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.604716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.604729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.604736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.604757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.604783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.604875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.604888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
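
The cycle that repeats above (capsule_cmd send, FABRIC PROPERTY GET, complete tcp_req) is the host polling the CSTS register over fabrics Property Get capsules until the discovery controller reports that its shutdown processing is done; the earlier "RTD3E = 0 us" and "shutdown timeout = 10000 ms" entries set the bounds for that loop, and the real logic lives in nvme_ctrlr_shutdown_poll_async() named in the log. A minimal, self-contained C sketch of the poll, where read_csts() is a stub stand-in for the transport's Property Get of CSTS (it is not SPDK API):

#include <stdint.h>
#include <stdio.h>

#define NVME_CSTS_SHST_MASK      0x0c /* CSTS bits 03:02, Shutdown Status */
#define NVME_CSTS_SHST_OCCURRING 0x04 /* 01b: shutdown processing occurring */
#define NVME_CSTS_SHST_COMPLETE  0x08 /* 10b: shutdown processing complete */

static unsigned polls;

/* Stub for the fabrics Property Get of CSTS that each cycle above issues;
 * it pretends the target finishes shutdown processing on the sixth poll,
 * matching the "shutdown complete in 6 milliseconds" result below. */
static uint32_t read_csts(void)
{
    return (++polls < 6) ? NVME_CSTS_SHST_OCCURRING : NVME_CSTS_SHST_COMPLETE;
}

static int wait_for_shutdown(unsigned timeout_ms)
{
    for (unsigned waited_ms = 1; waited_ms <= timeout_ms; waited_ms++) {
        if ((read_csts() & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_COMPLETE) {
            printf("shutdown complete in %u milliseconds\n", waited_ms);
            return 0;
        }
        /* a real host sleeps ~1 ms between polls; omitted here */
    }
    return -1; /* would hit the 10000 ms shutdown timeout noted above */
}

int main(void)
{
    return wait_for_shutdown(10000) ? 1 : 0;
}
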
00:22:05.939 [2024-10-30 12:33:38.604895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.604921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.604938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.939 [2024-10-30 12:33:38.604948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.939 [2024-10-30 12:33:38.604969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.939 [2024-10-30 12:33:38.605038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.939 [2024-10-30 12:33:38.605050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.939 [2024-10-30 12:33:38.605056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.605063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.939 [2024-10-30 12:33:38.605078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.939 [2024-10-30 12:33:38.605086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.940 [2024-10-30 12:33:38.605092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.940 [2024-10-30 12:33:38.605102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.940 [2024-10-30 12:33:38.605122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.940 [2024-10-30 12:33:38.605194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.940 [2024-10-30 12:33:38.605207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.940 [2024-10-30 12:33:38.605214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.940 [2024-10-30 12:33:38.605220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.940 [2024-10-30 12:33:38.605236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:05.940 [2024-10-30 12:33:38.605245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:05.940 [2024-10-30 12:33:38.605251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fff690) 00:22:05.940 [2024-10-30 12:33:38.609285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.940 [2024-10-30 12:33:38.609311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2061580, cid 3, qid 0 00:22:05.940 [2024-10-30 12:33:38.609422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:05.940 [2024-10-30 12:33:38.609435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:05.940 [2024-10-30 12:33:38.609441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:05.940 [2024-10-30 12:33:38.609448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2061580) on tqpair=0x1fff690 00:22:05.940 [2024-10-30 12:33:38.609461] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:06.201 00:22:06.201 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:06.201 [2024-10-30 12:33:38.645549] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:22:06.201 [2024-10-30 12:33:38.645593] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672860 ] 00:22:06.201 [2024-10-30 12:33:38.692905] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:06.201 [2024-10-30 12:33:38.692963] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:06.201 [2024-10-30 12:33:38.692974] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:06.201 [2024-10-30 12:33:38.692989] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:06.201 [2024-10-30 12:33:38.693002] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:06.201 [2024-10-30 12:33:38.696588] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:06.201 [2024-10-30 12:33:38.696627] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b63690 0 00:22:06.201 [2024-10-30 12:33:38.703265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:06.201 [2024-10-30 12:33:38.703285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:06.201 [2024-10-30 12:33:38.703293] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:06.201 [2024-10-30 12:33:38.703299] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:06.201 [2024-10-30 12:33:38.703332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.703344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.703351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.201 [2024-10-30 12:33:38.703364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:06.201 [2024-10-30 12:33:38.703391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.201 [2024-10-30 12:33:38.711269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.201 [2024-10-30 12:33:38.711287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.201 [2024-10-30 12:33:38.711294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.201 [2024-10-30 12:33:38.711315] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:22:06.201 [2024-10-30 12:33:38.711325] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:06.201 [2024-10-30 12:33:38.711335] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:06.201 [2024-10-30 12:33:38.711353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.201 [2024-10-30 12:33:38.711379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.201 [2024-10-30 12:33:38.711402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.201 [2024-10-30 12:33:38.711531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.201 [2024-10-30 12:33:38.711546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.201 [2024-10-30 12:33:38.711552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.201 [2024-10-30 12:33:38.711568] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:06.201 [2024-10-30 12:33:38.711581] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:06.201 [2024-10-30 12:33:38.711593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.201 [2024-10-30 12:33:38.711623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.201 [2024-10-30 12:33:38.711644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.201 [2024-10-30 12:33:38.711732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.201 [2024-10-30 12:33:38.711744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.201 [2024-10-30 12:33:38.711751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.201 [2024-10-30 12:33:38.711758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.201 [2024-10-30 12:33:38.711766] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:06.201 [2024-10-30 12:33:38.711779] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:06.201 [2024-10-30 12:33:38.711791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.711798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.711804] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.711815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.202 [2024-10-30 12:33:38.711835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.711920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.202 [2024-10-30 12:33:38.711932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.202 [2024-10-30 12:33:38.711939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.711945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.202 [2024-10-30 12:33:38.711954] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:06.202 [2024-10-30 12:33:38.711974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.711984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.711991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.712001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.202 [2024-10-30 12:33:38.712022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.712102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.202 [2024-10-30 12:33:38.712115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.202 [2024-10-30 12:33:38.712122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.202 [2024-10-30 12:33:38.712136] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:06.202 [2024-10-30 12:33:38.712144] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:06.202 [2024-10-30 12:33:38.712157] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:06.202 [2024-10-30 12:33:38.712267] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:06.202 [2024-10-30 12:33:38.712277] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:06.202 [2024-10-30 12:33:38.712294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.712319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:06.202 [2024-10-30 12:33:38.712340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.712455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.202 [2024-10-30 12:33:38.712467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.202 [2024-10-30 12:33:38.712474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.202 [2024-10-30 12:33:38.712489] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:06.202 [2024-10-30 12:33:38.712505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.712530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.202 [2024-10-30 12:33:38.712550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.712631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.202 [2024-10-30 12:33:38.712645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.202 [2024-10-30 12:33:38.712652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.202 [2024-10-30 12:33:38.712666] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:06.202 [2024-10-30 12:33:38.712675] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:06.202 [2024-10-30 12:33:38.712688] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:06.202 [2024-10-30 12:33:38.712702] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:06.202 [2024-10-30 12:33:38.712716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.712735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.202 [2024-10-30 12:33:38.712755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.712889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.202 [2024-10-30 12:33:38.712902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.202 [2024-10-30 12:33:38.712908] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
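
The entries just above trace the standard NVMe controller-enable handshake over the fabrics transport: the host reads VS and CAP, checks CC, observes "CC.EN = 0 && CSTS.RDY = 0", writes CC.EN = 1, then waits for "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" before moving on to Identify. The 15000 ms timeouts are consistent with CAP.TO = 30 in its 500 ms units. A self-contained C sketch of that handshake under those assumptions; reg_read()/reg_write() are stubs standing in for the Property Get/Set capsules shown in the log, with a fake register file so the program runs on its own:

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14 /* Controller Configuration */
#define NVME_REG_CSTS 0x1c /* Controller Status */
#define NVME_CC_EN    0x01 /* CC bit 0: enable */
#define NVME_CSTS_RDY 0x01 /* CSTS bit 0: ready */

static uint32_t cc, csts; /* stub register file standing in for the target */

static uint32_t reg_read(uint32_t off)
{
    return (off == NVME_REG_CC) ? cc : csts;
}

static void reg_write(uint32_t off, uint32_t val)
{
    if (off == NVME_REG_CC) {
        cc = val;
        csts = (cc & NVME_CC_EN) ? NVME_CSTS_RDY : 0; /* stub: RDY tracks EN */
    }
}

int main(void)
{
    unsigned timeout_ms = 30 * 500; /* CAP.TO = 30 in 500 ms units -> 15000 ms */

    /* "CC.EN = 0 && CSTS.RDY = 0" -> safe to enable straight away */
    if (!(reg_read(NVME_REG_CC) & NVME_CC_EN) &&
        !(reg_read(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
        reg_write(NVME_REG_CC, reg_read(NVME_REG_CC) | NVME_CC_EN); /* Setting CC.EN = 1 */
    }

    /* "setting state to wait for CSTS.RDY = 1" */
    for (unsigned ms = 0; ms < timeout_ms; ms++) {
        if (reg_read(NVME_REG_CSTS) & NVME_CSTS_RDY) {
            printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
            return 0;
        }
    }
    return 1; /* enable timed out */
}
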
00:22:06.202 [2024-10-30 12:33:38.712915] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=4096, cccid=0 00:22:06.202 [2024-10-30 12:33:38.712922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5100) on tqpair(0x1b63690): expected_datao=0, payload_size=4096 00:22:06.202 [2024-10-30 12:33:38.712933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712944] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712952] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.202 [2024-10-30 12:33:38.712973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.202 [2024-10-30 12:33:38.712980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.712986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.202 [2024-10-30 12:33:38.712997] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:06.202 [2024-10-30 12:33:38.713006] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:06.202 [2024-10-30 12:33:38.713013] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:06.202 [2024-10-30 12:33:38.713020] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:06.202 [2024-10-30 12:33:38.713027] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:06.202 [2024-10-30 12:33:38.713035] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:06.202 [2024-10-30 12:33:38.713049] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:06.202 [2024-10-30 12:33:38.713061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.713085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.202 [2024-10-30 12:33:38.713106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.713191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.202 [2024-10-30 12:33:38.713205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.202 [2024-10-30 12:33:38.713212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.202 [2024-10-30 12:33:38.713233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713242] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.713267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.202 [2024-10-30 12:33:38.713279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.713301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.202 [2024-10-30 12:33:38.713310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.713331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.202 [2024-10-30 12:33:38.713347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.713370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.202 [2024-10-30 12:33:38.713378] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:06.202 [2024-10-30 12:33:38.713392] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:06.202 [2024-10-30 12:33:38.713404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.202 [2024-10-30 12:33:38.713411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.202 [2024-10-30 12:33:38.713421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.202 [2024-10-30 12:33:38.713443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5100, cid 0, qid 0 00:22:06.202 [2024-10-30 12:33:38.713454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5280, cid 1, qid 0 00:22:06.202 [2024-10-30 12:33:38.713461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5400, cid 2, qid 0 00:22:06.203 [2024-10-30 12:33:38.713469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.203 [2024-10-30 12:33:38.713476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.203 [2024-10-30 12:33:38.713621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 
12:33:38.713635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.713642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.713648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.203 [2024-10-30 12:33:38.713661] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:06.203 [2024-10-30 12:33:38.713671] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.713684] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.713695] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.713706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.713713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.713719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.203 [2024-10-30 12:33:38.713730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:06.203 [2024-10-30 12:33:38.713750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.203 [2024-10-30 12:33:38.713933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 12:33:38.713947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.713954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.713961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.203 [2024-10-30 12:33:38.714030] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.714053] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.714069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.203 [2024-10-30 12:33:38.714087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.203 [2024-10-30 12:33:38.714108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.203 [2024-10-30 12:33:38.714215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.203 [2024-10-30 12:33:38.714230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.203 [2024-10-30 12:33:38.714237] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714243] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=4096, cccid=4 00:22:06.203 [2024-10-30 12:33:38.714251] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5700) on tqpair(0x1b63690): expected_datao=0, payload_size=4096 00:22:06.203 [2024-10-30 12:33:38.714266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714278] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714285] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 12:33:38.714306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.714313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.203 [2024-10-30 12:33:38.714336] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:06.203 [2024-10-30 12:33:38.714357] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.714375] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.714389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.714397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.203 [2024-10-30 12:33:38.714407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.203 [2024-10-30 12:33:38.714428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.203 [2024-10-30 12:33:38.718269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.203 [2024-10-30 12:33:38.718295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.203 [2024-10-30 12:33:38.718303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718309] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=4096, cccid=4 00:22:06.203 [2024-10-30 12:33:38.718316] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5700) on tqpair(0x1b63690): expected_datao=0, payload_size=4096 00:22:06.203 [2024-10-30 12:33:38.718323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718333] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718340] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 12:33:38.718357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.718367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.203 [2024-10-30 12:33:38.718395] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718413] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.203 [2024-10-30 12:33:38.718445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.203 [2024-10-30 12:33:38.718467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.203 [2024-10-30 12:33:38.718603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.203 [2024-10-30 12:33:38.718618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.203 [2024-10-30 12:33:38.718624] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718631] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=4096, cccid=4 00:22:06.203 [2024-10-30 12:33:38.718638] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5700) on tqpair(0x1b63690): expected_datao=0, payload_size=4096 00:22:06.203 [2024-10-30 12:33:38.718645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718655] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718663] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 12:33:38.718684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.718690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.203 [2024-10-30 12:33:38.718710] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718725] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718740] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718751] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718760] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718769] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
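
Across the stretch above, initialization walks the Identify admin command with four different CNS codes, visible as the cdw10 values in the IDENTIFY entries: 01h for the controller data, 02h for the active namespace ID list, then 00h and 03h against nsid 1 once "Namespace 1 was added", each answered with a 4096-byte c2h data PDU (datal=4096). A self-contained C sketch that rebuilds those four commands in the logged order; submit_admin() is a hypothetical placeholder that prints instead of queueing on the admin qpair:

#include <stdint.h>
#include <stdio.h>

#define NVME_OPC_IDENTIFY 0x06

enum nvme_identify_cns {
    NVME_CNS_NS        = 0x00, /* Identify Namespace            (cdw10:00000000) */
    NVME_CNS_CTRLR     = 0x01, /* Identify Controller           (cdw10:00000001) */
    NVME_CNS_ACTIVE_NS = 0x02, /* Active Namespace ID list      (cdw10:00000002) */
    NVME_CNS_NS_DESC   = 0x03, /* NS Identification Descriptors (cdw10:00000003) */
};

struct identify_cmd {
    uint8_t  opc;
    uint32_t nsid;
    uint32_t cdw10;
};

/* Hypothetical placeholder: prints the command instead of queueing it on
 * the admin qpair the way the real initialization sequence does. */
static void submit_admin(struct identify_cmd c)
{
    printf("IDENTIFY (%02xh) nsid:%u cdw10:%08x -> expects a 4096-byte payload\n",
           (unsigned)c.opc, (unsigned)c.nsid, (unsigned)c.cdw10);
}

int main(void)
{
    /* Order taken from the log: controller data first, then the namespace walk. */
    submit_admin((struct identify_cmd){NVME_OPC_IDENTIFY, 0, NVME_CNS_CTRLR});
    submit_admin((struct identify_cmd){NVME_OPC_IDENTIFY, 0, NVME_CNS_ACTIVE_NS});
    submit_admin((struct identify_cmd){NVME_OPC_IDENTIFY, 1, NVME_CNS_NS});
    submit_admin((struct identify_cmd){NVME_OPC_IDENTIFY, 1, NVME_CNS_NS_DESC});
    return 0;
}
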
00:22:06.203 [2024-10-30 12:33:38.718777] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:06.203 [2024-10-30 12:33:38.718785] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:06.203 [2024-10-30 12:33:38.718793] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:06.203 [2024-10-30 12:33:38.718813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.203 [2024-10-30 12:33:38.718835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.203 [2024-10-30 12:33:38.718847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.718860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b63690) 00:22:06.203 [2024-10-30 12:33:38.718869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.203 [2024-10-30 12:33:38.718912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.203 [2024-10-30 12:33:38.718923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5880, cid 5, qid 0 00:22:06.203 [2024-10-30 12:33:38.719088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 12:33:38.719102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.719109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.203 [2024-10-30 12:33:38.719116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.203 [2024-10-30 12:33:38.719127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.203 [2024-10-30 12:33:38.719136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.203 [2024-10-30 12:33:38.719142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5880) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.719164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5880, cid 5, qid 0 00:22:06.204 [2024-10-30 12:33:38.719298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.719313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.719319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 
[2024-10-30 12:33:38.719326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5880) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.719342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5880, cid 5, qid 0 00:22:06.204 [2024-10-30 12:33:38.719462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.719476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.719483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5880) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.719505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5880, cid 5, qid 0 00:22:06.204 [2024-10-30 12:33:38.719630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.719645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.719652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5880) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.719682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.719784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b63690) 00:22:06.204 [2024-10-30 12:33:38.719794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.204 [2024-10-30 12:33:38.719815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5880, cid 5, qid 0 00:22:06.204 [2024-10-30 12:33:38.719841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5700, cid 4, qid 0 00:22:06.204 [2024-10-30 12:33:38.719848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5a00, cid 6, qid 0 00:22:06.204 [2024-10-30 12:33:38.719855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5b80, cid 7, qid 0 00:22:06.204 [2024-10-30 12:33:38.720125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.204 [2024-10-30 12:33:38.720141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.204 [2024-10-30 12:33:38.720147] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720154] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=8192, cccid=5 00:22:06.204 [2024-10-30 12:33:38.720161] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5880) on tqpair(0x1b63690): expected_datao=0, payload_size=8192 00:22:06.204 [2024-10-30 12:33:38.720168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720191] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720200] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.204 [2024-10-30 12:33:38.720222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.204 [2024-10-30 12:33:38.720228] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720234] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=512, cccid=4 00:22:06.204 [2024-10-30 12:33:38.720242] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5700) on tqpair(0x1b63690): expected_datao=0, payload_size=512 00:22:06.204 [2024-10-30 12:33:38.720249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720281] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.204 [2024-10-30 12:33:38.720299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.204 [2024-10-30 12:33:38.720305] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720311] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=512, cccid=6 00:22:06.204 [2024-10-30 12:33:38.720318] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5a00) on 
tqpair(0x1b63690): expected_datao=0, payload_size=512 00:22:06.204 [2024-10-30 12:33:38.720325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720334] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720341] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:06.204 [2024-10-30 12:33:38.720358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:06.204 [2024-10-30 12:33:38.720364] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720370] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b63690): datao=0, datal=4096, cccid=7 00:22:06.204 [2024-10-30 12:33:38.720377] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc5b80) on tqpair(0x1b63690): expected_datao=0, payload_size=4096 00:22:06.204 [2024-10-30 12:33:38.720384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720400] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.720421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.720428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5880) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.720455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.720467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.720473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5700) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.720495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.720520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.720527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5a00) on tqpair=0x1b63690 00:22:06.204 [2024-10-30 12:33:38.720543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.204 [2024-10-30 12:33:38.720553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.204 [2024-10-30 12:33:38.720559] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.204 [2024-10-30 12:33:38.720565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5b80) on tqpair=0x1b63690 00:22:06.204 ===================================================== 00:22:06.204 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:06.204 ===================================================== 00:22:06.204 Controller Capabilities/Features 00:22:06.204 ================================ 00:22:06.204 Vendor ID: 8086 00:22:06.204 Subsystem Vendor ID: 8086 
00:22:06.204 Serial Number: SPDK00000000000001 00:22:06.204 Model Number: SPDK bdev Controller 00:22:06.204 Firmware Version: 25.01 00:22:06.204 Recommended Arb Burst: 6 00:22:06.204 IEEE OUI Identifier: e4 d2 5c 00:22:06.204 Multi-path I/O 00:22:06.204 May have multiple subsystem ports: Yes 00:22:06.204 May have multiple controllers: Yes 00:22:06.204 Associated with SR-IOV VF: No 00:22:06.204 Max Data Transfer Size: 131072 00:22:06.204 Max Number of Namespaces: 32 00:22:06.204 Max Number of I/O Queues: 127 00:22:06.204 NVMe Specification Version (VS): 1.3 00:22:06.204 NVMe Specification Version (Identify): 1.3 00:22:06.204 Maximum Queue Entries: 128 00:22:06.204 Contiguous Queues Required: Yes 00:22:06.204 Arbitration Mechanisms Supported 00:22:06.204 Weighted Round Robin: Not Supported 00:22:06.204 Vendor Specific: Not Supported 00:22:06.204 Reset Timeout: 15000 ms 00:22:06.204 Doorbell Stride: 4 bytes 00:22:06.204 NVM Subsystem Reset: Not Supported 00:22:06.204 Command Sets Supported 00:22:06.204 NVM Command Set: Supported 00:22:06.204 Boot Partition: Not Supported 00:22:06.204 Memory Page Size Minimum: 4096 bytes 00:22:06.205 Memory Page Size Maximum: 4096 bytes 00:22:06.205 Persistent Memory Region: Not Supported 00:22:06.205 Optional Asynchronous Events Supported 00:22:06.205 Namespace Attribute Notices: Supported 00:22:06.205 Firmware Activation Notices: Not Supported 00:22:06.205 ANA Change Notices: Not Supported 00:22:06.205 PLE Aggregate Log Change Notices: Not Supported 00:22:06.205 LBA Status Info Alert Notices: Not Supported 00:22:06.205 EGE Aggregate Log Change Notices: Not Supported 00:22:06.205 Normal NVM Subsystem Shutdown event: Not Supported 00:22:06.205 Zone Descriptor Change Notices: Not Supported 00:22:06.205 Discovery Log Change Notices: Not Supported 00:22:06.205 Controller Attributes 00:22:06.205 128-bit Host Identifier: Supported 00:22:06.205 Non-Operational Permissive Mode: Not Supported 00:22:06.205 NVM Sets: Not Supported 00:22:06.205 Read Recovery Levels: Not Supported 00:22:06.205 Endurance Groups: Not Supported 00:22:06.205 Predictable Latency Mode: Not Supported 00:22:06.205 Traffic Based Keep ALive: Not Supported 00:22:06.205 Namespace Granularity: Not Supported 00:22:06.205 SQ Associations: Not Supported 00:22:06.205 UUID List: Not Supported 00:22:06.205 Multi-Domain Subsystem: Not Supported 00:22:06.205 Fixed Capacity Management: Not Supported 00:22:06.205 Variable Capacity Management: Not Supported 00:22:06.205 Delete Endurance Group: Not Supported 00:22:06.205 Delete NVM Set: Not Supported 00:22:06.205 Extended LBA Formats Supported: Not Supported 00:22:06.205 Flexible Data Placement Supported: Not Supported 00:22:06.205 00:22:06.205 Controller Memory Buffer Support 00:22:06.205 ================================ 00:22:06.205 Supported: No 00:22:06.205 00:22:06.205 Persistent Memory Region Support 00:22:06.205 ================================ 00:22:06.205 Supported: No 00:22:06.205 00:22:06.205 Admin Command Set Attributes 00:22:06.205 ============================ 00:22:06.205 Security Send/Receive: Not Supported 00:22:06.205 Format NVM: Not Supported 00:22:06.205 Firmware Activate/Download: Not Supported 00:22:06.205 Namespace Management: Not Supported 00:22:06.205 Device Self-Test: Not Supported 00:22:06.205 Directives: Not Supported 00:22:06.205 NVMe-MI: Not Supported 00:22:06.205 Virtualization Management: Not Supported 00:22:06.205 Doorbell Buffer Config: Not Supported 00:22:06.205 Get LBA Status Capability: Not Supported 00:22:06.205 Command & 
Feature Lockdown Capability: Not Supported 00:22:06.205 Abort Command Limit: 4 00:22:06.205 Async Event Request Limit: 4 00:22:06.205 Number of Firmware Slots: N/A 00:22:06.205 Firmware Slot 1 Read-Only: N/A 00:22:06.205 Firmware Activation Without Reset: N/A 00:22:06.205 Multiple Update Detection Support: N/A 00:22:06.205 Firmware Update Granularity: No Information Provided 00:22:06.205 Per-Namespace SMART Log: No 00:22:06.205 Asymmetric Namespace Access Log Page: Not Supported 00:22:06.205 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:06.205 Command Effects Log Page: Supported 00:22:06.205 Get Log Page Extended Data: Supported 00:22:06.205 Telemetry Log Pages: Not Supported 00:22:06.205 Persistent Event Log Pages: Not Supported 00:22:06.205 Supported Log Pages Log Page: May Support 00:22:06.205 Commands Supported & Effects Log Page: Not Supported 00:22:06.205 Feature Identifiers & Effects Log Page:May Support 00:22:06.205 NVMe-MI Commands & Effects Log Page: May Support 00:22:06.205 Data Area 4 for Telemetry Log: Not Supported 00:22:06.205 Error Log Page Entries Supported: 128 00:22:06.205 Keep Alive: Supported 00:22:06.205 Keep Alive Granularity: 10000 ms 00:22:06.205 00:22:06.205 NVM Command Set Attributes 00:22:06.205 ========================== 00:22:06.205 Submission Queue Entry Size 00:22:06.205 Max: 64 00:22:06.205 Min: 64 00:22:06.205 Completion Queue Entry Size 00:22:06.205 Max: 16 00:22:06.205 Min: 16 00:22:06.205 Number of Namespaces: 32 00:22:06.205 Compare Command: Supported 00:22:06.205 Write Uncorrectable Command: Not Supported 00:22:06.205 Dataset Management Command: Supported 00:22:06.205 Write Zeroes Command: Supported 00:22:06.205 Set Features Save Field: Not Supported 00:22:06.205 Reservations: Supported 00:22:06.205 Timestamp: Not Supported 00:22:06.205 Copy: Supported 00:22:06.205 Volatile Write Cache: Present 00:22:06.205 Atomic Write Unit (Normal): 1 00:22:06.205 Atomic Write Unit (PFail): 1 00:22:06.205 Atomic Compare & Write Unit: 1 00:22:06.205 Fused Compare & Write: Supported 00:22:06.205 Scatter-Gather List 00:22:06.205 SGL Command Set: Supported 00:22:06.205 SGL Keyed: Supported 00:22:06.205 SGL Bit Bucket Descriptor: Not Supported 00:22:06.205 SGL Metadata Pointer: Not Supported 00:22:06.205 Oversized SGL: Not Supported 00:22:06.205 SGL Metadata Address: Not Supported 00:22:06.205 SGL Offset: Supported 00:22:06.205 Transport SGL Data Block: Not Supported 00:22:06.205 Replay Protected Memory Block: Not Supported 00:22:06.205 00:22:06.205 Firmware Slot Information 00:22:06.205 ========================= 00:22:06.205 Active slot: 1 00:22:06.205 Slot 1 Firmware Revision: 25.01 00:22:06.205 00:22:06.205 00:22:06.205 Commands Supported and Effects 00:22:06.205 ============================== 00:22:06.205 Admin Commands 00:22:06.205 -------------- 00:22:06.205 Get Log Page (02h): Supported 00:22:06.205 Identify (06h): Supported 00:22:06.205 Abort (08h): Supported 00:22:06.205 Set Features (09h): Supported 00:22:06.205 Get Features (0Ah): Supported 00:22:06.205 Asynchronous Event Request (0Ch): Supported 00:22:06.205 Keep Alive (18h): Supported 00:22:06.205 I/O Commands 00:22:06.205 ------------ 00:22:06.205 Flush (00h): Supported LBA-Change 00:22:06.205 Write (01h): Supported LBA-Change 00:22:06.205 Read (02h): Supported 00:22:06.205 Compare (05h): Supported 00:22:06.205 Write Zeroes (08h): Supported LBA-Change 00:22:06.205 Dataset Management (09h): Supported LBA-Change 00:22:06.205 Copy (19h): Supported LBA-Change 00:22:06.205 00:22:06.205 Error Log 00:22:06.205 
========= 00:22:06.205 00:22:06.205 Arbitration 00:22:06.205 =========== 00:22:06.205 Arbitration Burst: 1 00:22:06.205 00:22:06.205 Power Management 00:22:06.205 ================ 00:22:06.205 Number of Power States: 1 00:22:06.205 Current Power State: Power State #0 00:22:06.205 Power State #0: 00:22:06.205 Max Power: 0.00 W 00:22:06.205 Non-Operational State: Operational 00:22:06.205 Entry Latency: Not Reported 00:22:06.205 Exit Latency: Not Reported 00:22:06.205 Relative Read Throughput: 0 00:22:06.205 Relative Read Latency: 0 00:22:06.205 Relative Write Throughput: 0 00:22:06.205 Relative Write Latency: 0 00:22:06.205 Idle Power: Not Reported 00:22:06.205 Active Power: Not Reported 00:22:06.205 Non-Operational Permissive Mode: Not Supported 00:22:06.205 00:22:06.205 Health Information 00:22:06.205 ================== 00:22:06.205 Critical Warnings: 00:22:06.205 Available Spare Space: OK 00:22:06.205 Temperature: OK 00:22:06.205 Device Reliability: OK 00:22:06.205 Read Only: No 00:22:06.205 Volatile Memory Backup: OK 00:22:06.205 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:06.205 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:06.205 Available Spare: 0% 00:22:06.205 Available Spare Threshold: 0% 00:22:06.205 Life Percentage Used:[2024-10-30 12:33:38.720690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.205 [2024-10-30 12:33:38.720701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b63690) 00:22:06.205 [2024-10-30 12:33:38.720711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.205 [2024-10-30 12:33:38.720732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5b80, cid 7, qid 0 00:22:06.205 [2024-10-30 12:33:38.720930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.205 [2024-10-30 12:33:38.720947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.205 [2024-10-30 12:33:38.720954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.205 [2024-10-30 12:33:38.720961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5b80) on tqpair=0x1b63690 00:22:06.205 [2024-10-30 12:33:38.721010] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:06.205 [2024-10-30 12:33:38.721030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5100) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.206 [2024-10-30 12:33:38.721049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5280) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.206 [2024-10-30 12:33:38.721065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5400) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.206 [2024-10-30 12:33:38.721080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.206 [2024-10-30 12:33:38.721099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.721138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.721159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.721305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.721320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 12:33:38.721327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.721369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.721396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.721489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.721503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 12:33:38.721510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721524] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:06.206 [2024-10-30 12:33:38.721531] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:06.206 [2024-10-30 12:33:38.721547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.721577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.721597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.721680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.721694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 
12:33:38.721701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.721749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.721769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.721855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.721868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 12:33:38.721875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.721897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.721912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.721922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.721942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.722025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.722039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 12:33:38.722045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.722052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.722068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.722076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.722083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.722093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.722113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.729267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.729284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 12:33:38.729291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.729298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on 
tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.729316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.729325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.729331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b63690) 00:22:06.206 [2024-10-30 12:33:38.729346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.206 [2024-10-30 12:33:38.729369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc5580, cid 3, qid 0 00:22:06.206 [2024-10-30 12:33:38.729487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:06.206 [2024-10-30 12:33:38.729501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:06.206 [2024-10-30 12:33:38.729508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:06.206 [2024-10-30 12:33:38.729515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc5580) on tqpair=0x1b63690 00:22:06.206 [2024-10-30 12:33:38.729528] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:22:06.206 0% 00:22:06.206 Data Units Read: 0 00:22:06.206 Data Units Written: 0 00:22:06.206 Host Read Commands: 0 00:22:06.206 Host Write Commands: 0 00:22:06.206 Controller Busy Time: 0 minutes 00:22:06.206 Power Cycles: 0 00:22:06.206 Power On Hours: 0 hours 00:22:06.206 Unsafe Shutdowns: 0 00:22:06.206 Unrecoverable Media Errors: 0 00:22:06.206 Lifetime Error Log Entries: 0 00:22:06.206 Warning Temperature Time: 0 minutes 00:22:06.206 Critical Temperature Time: 0 minutes 00:22:06.206 00:22:06.206 Number of Queues 00:22:06.206 ================ 00:22:06.206 Number of I/O Submission Queues: 127 00:22:06.206 Number of I/O Completion Queues: 127 00:22:06.206 00:22:06.206 Active Namespaces 00:22:06.206 ================= 00:22:06.206 Namespace ID:1 00:22:06.206 Error Recovery Timeout: Unlimited 00:22:06.206 Command Set Identifier: NVM (00h) 00:22:06.206 Deallocate: Supported 00:22:06.206 Deallocated/Unwritten Error: Not Supported 00:22:06.206 Deallocated Read Value: Unknown 00:22:06.206 Deallocate in Write Zeroes: Not Supported 00:22:06.206 Deallocated Guard Field: 0xFFFF 00:22:06.206 Flush: Supported 00:22:06.206 Reservation: Supported 00:22:06.206 Namespace Sharing Capabilities: Multiple Controllers 00:22:06.206 Size (in LBAs): 131072 (0GiB) 00:22:06.206 Capacity (in LBAs): 131072 (0GiB) 00:22:06.206 Utilization (in LBAs): 131072 (0GiB) 00:22:06.206 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:06.206 EUI64: ABCDEF0123456789 00:22:06.206 UUID: 7fd7cf17-0af0-43fe-a298-63e793406842 00:22:06.206 Thin Provisioning: Not Supported 00:22:06.206 Per-NS Atomic Units: Yes 00:22:06.206 Atomic Boundary Size (Normal): 0 00:22:06.206 Atomic Boundary Size (PFail): 0 00:22:06.206 Atomic Boundary Offset: 0 00:22:06.206 Maximum Single Source Range Length: 65535 00:22:06.206 Maximum Copy Length: 65535 00:22:06.206 Maximum Source Range Count: 1 00:22:06.206 NGUID/EUI64 Never Reused: No 00:22:06.206 Namespace Write Protected: No 00:22:06.206 Number of LBA Formats: 1 00:22:06.206 Current LBA Format: LBA Format #00 00:22:06.206 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:06.206 00:22:06.206 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.207 rmmod nvme_tcp 00:22:06.207 rmmod nvme_fabrics 00:22:06.207 rmmod nvme_keyring 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 672714 ']' 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 672714 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 672714 ']' 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 672714 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 672714 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 672714' 00:22:06.207 killing process with pid 672714 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 672714 00:22:06.207 12:33:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 672714 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.466 12:33:39 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.466 12:33:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.007 00:22:09.007 real 0m5.701s 00:22:09.007 user 0m4.665s 00:22:09.007 sys 0m2.009s 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.007 ************************************ 00:22:09.007 END TEST nvmf_identify 00:22:09.007 ************************************ 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.007 ************************************ 00:22:09.007 START TEST nvmf_perf 00:22:09.007 ************************************ 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:09.007 * Looking for test storage... 
00:22:09.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:09.007 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:09.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.008 --rc genhtml_branch_coverage=1 00:22:09.008 --rc genhtml_function_coverage=1 00:22:09.008 --rc genhtml_legend=1 00:22:09.008 --rc geninfo_all_blocks=1 00:22:09.008 --rc geninfo_unexecuted_blocks=1 00:22:09.008 00:22:09.008 ' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:09.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.008 --rc genhtml_branch_coverage=1 00:22:09.008 --rc genhtml_function_coverage=1 00:22:09.008 --rc genhtml_legend=1 00:22:09.008 --rc geninfo_all_blocks=1 00:22:09.008 --rc geninfo_unexecuted_blocks=1 00:22:09.008 00:22:09.008 ' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:09.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.008 --rc genhtml_branch_coverage=1 00:22:09.008 --rc genhtml_function_coverage=1 00:22:09.008 --rc genhtml_legend=1 00:22:09.008 --rc geninfo_all_blocks=1 00:22:09.008 --rc geninfo_unexecuted_blocks=1 00:22:09.008 00:22:09.008 ' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:09.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.008 --rc genhtml_branch_coverage=1 00:22:09.008 --rc genhtml_function_coverage=1 00:22:09.008 --rc genhtml_legend=1 00:22:09.008 --rc geninfo_all_blocks=1 00:22:09.008 --rc geninfo_unexecuted_blocks=1 00:22:09.008 00:22:09.008 ' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:09.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.008 12:33:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.008 12:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:10.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:10.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:10.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.911 12:33:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:10.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.911 12:33:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:22:10.911 00:22:10.911 --- 10.0.0.2 ping statistics --- 00:22:10.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.911 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:10.911 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:10.911 00:22:10.911 --- 10.0.0.1 ping statistics --- 00:22:10.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.912 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=674799 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 674799 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 674799 ']' 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:10.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:10.912 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:11.170 [2024-10-30 12:33:43.594548] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:22:11.170 [2024-10-30 12:33:43.594642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.170 [2024-10-30 12:33:43.666075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.170 [2024-10-30 12:33:43.724099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.170 [2024-10-30 12:33:43.724148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.170 [2024-10-30 12:33:43.724176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.170 [2024-10-30 12:33:43.724187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.170 [2024-10-30 12:33:43.724210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.170 [2024-10-30 12:33:43.725859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.170 [2024-10-30 12:33:43.725925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.170 [2024-10-30 12:33:43.725988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.170 [2024-10-30 12:33:43.725991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.170 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:11.170 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:11.170 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.170 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.170 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:11.429 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.429 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:11.429 12:33:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:14.710 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:14.710 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:14.710 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:14.710 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:14.968 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
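Note: at this point perf.sh has located the local NVMe controller (traddr 0000:88:00.0, pulled out of framework_get_config bdev with jq) and created a 64 MiB Malloc bdev with 512-byte blocks, so bdevs=' Malloc0'. The trace lines that follow export both Malloc0 and the local Nvme0n1 through a TCP subsystem before the fabric perf runs start. A minimal sketch of that bring-up, using the same rpc.py calls the trace records (full script paths shortened for readability; this is a recap of what the log shows, not a reference invocation):

  rpc.py nvmf_create_transport -t tcp -o                                      # initialize the TCP transport
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0             # namespace 1: RAM-backed bdev
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1             # namespace 2: the local NVMe drive
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420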
00:22:14.968 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:14.968 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:14.968 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:14.968 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.226 [2024-10-30 12:33:47.867500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.226 12:33:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.792 12:33:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:15.792 12:33:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.792 12:33:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:15.792 12:33:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:16.050 12:33:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.308 [2024-10-30 12:33:48.967530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.308 12:33:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:16.875 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:16.875 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:16.875 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:16.875 12:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:17.810 Initializing NVMe Controllers 00:22:17.810 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:17.810 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:17.810 Initialization complete. Launching workers. 
00:22:17.810 ======================================================== 00:22:17.810 Latency(us) 00:22:17.810 Device Information : IOPS MiB/s Average min max 00:22:17.810 PCIE (0000:88:00.0) NSID 1 from core 0: 84400.90 329.69 378.61 31.79 8256.00 00:22:17.810 ======================================================== 00:22:17.810 Total : 84400.90 329.69 378.61 31.79 8256.00 00:22:17.810 00:22:18.068 12:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:19.440 Initializing NVMe Controllers 00:22:19.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:19.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:19.440 Initialization complete. Launching workers. 00:22:19.440 ======================================================== 00:22:19.440 Latency(us) 00:22:19.440 Device Information : IOPS MiB/s Average min max 00:22:19.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 119.94 0.47 8346.40 138.21 45802.77 00:22:19.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.97 0.24 16394.32 7922.20 50868.11 00:22:19.440 ======================================================== 00:22:19.440 Total : 181.91 0.71 11088.00 138.21 50868.11 00:22:19.440 00:22:19.440 12:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:20.815 Initializing NVMe Controllers 00:22:20.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:20.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:20.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:20.815 Initialization complete. Launching workers. 00:22:20.815 ======================================================== 00:22:20.815 Latency(us) 00:22:20.815 Device Information : IOPS MiB/s Average min max 00:22:20.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8466.89 33.07 3801.12 720.72 45581.79 00:22:20.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3897.95 15.23 8252.20 6085.87 16104.30 00:22:20.815 ======================================================== 00:22:20.815 Total : 12364.84 48.30 5204.30 720.72 45581.79 00:22:20.815 00:22:21.073 12:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:21.073 12:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:21.073 12:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:23.601 Initializing NVMe Controllers 00:22:23.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.601 Controller IO queue size 128, less than required. 00:22:23.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:23.601 Controller IO queue size 128, less than required. 00:22:23.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:23.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:23.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:23.601 Initialization complete. Launching workers. 00:22:23.601 ======================================================== 00:22:23.601 Latency(us) 00:22:23.601 Device Information : IOPS MiB/s Average min max 00:22:23.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1679.30 419.83 77862.72 48033.94 136158.89 00:22:23.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.22 149.05 228080.84 102165.56 336312.23 00:22:23.601 ======================================================== 00:22:23.601 Total : 2275.52 568.88 117222.04 48033.94 336312.23 00:22:23.601 00:22:23.601 12:33:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:23.858 No valid NVMe controllers or AIO or URING devices found 00:22:23.858 Initializing NVMe Controllers 00:22:23.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.858 Controller IO queue size 128, less than required. 00:22:23.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:23.858 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:23.858 Controller IO queue size 128, less than required. 00:22:23.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:23.858 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:23.858 WARNING: Some requested NVMe devices were skipped 00:22:23.858 12:33:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:26.385 Initializing NVMe Controllers 00:22:26.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.385 Controller IO queue size 128, less than required. 00:22:26.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.385 Controller IO queue size 128, less than required. 00:22:26.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:26.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:26.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:26.385 Initialization complete. Launching workers. 
00:22:26.385 00:22:26.385 ==================== 00:22:26.385 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:26.385 TCP transport: 00:22:26.385 polls: 8624 00:22:26.385 idle_polls: 5647 00:22:26.385 sock_completions: 2977 00:22:26.385 nvme_completions: 5703 00:22:26.385 submitted_requests: 8622 00:22:26.385 queued_requests: 1 00:22:26.385 00:22:26.385 ==================== 00:22:26.385 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:26.385 TCP transport: 00:22:26.385 polls: 8667 00:22:26.385 idle_polls: 5964 00:22:26.385 sock_completions: 2703 00:22:26.385 nvme_completions: 5319 00:22:26.385 submitted_requests: 8026 00:22:26.385 queued_requests: 1 00:22:26.385 ======================================================== 00:22:26.385 Latency(us) 00:22:26.385 Device Information : IOPS MiB/s Average min max 00:22:26.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1425.42 356.36 92783.31 44969.15 152290.62 00:22:26.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1329.43 332.36 97259.56 42405.30 158351.36 00:22:26.385 ======================================================== 00:22:26.385 Total : 2754.85 688.71 94943.45 42405.30 158351.36 00:22:26.385 00:22:26.385 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:26.385 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.643 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:26.643 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.644 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.644 rmmod nvme_tcp 00:22:26.644 rmmod nvme_fabrics 00:22:26.902 rmmod nvme_keyring 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 674799 ']' 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 674799 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 674799 ']' 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 674799 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 674799 00:22:26.902 12:33:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 674799' 00:22:26.902 killing process with pid 674799 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 674799 00:22:26.902 12:33:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 674799 00:22:28.273 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.273 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.273 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.533 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.534 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.534 12:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.440 12:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.440 00:22:30.440 real 0m21.809s 00:22:30.440 user 1m7.086s 00:22:30.440 sys 0m5.618s 00:22:30.440 12:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:30.440 ************************************ 00:22:30.440 END TEST nvmf_perf 00:22:30.440 ************************************ 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.440 ************************************ 00:22:30.440 START TEST nvmf_fio_host 00:22:30.440 ************************************ 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:30.440 * Looking for test storage... 
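
The nvmf_perf test above finishes in roughly 21.8 s of wall time, and nvmftestfini unwinds everything it created before nvmf_fio_host (already opening above) starts from a clean slate. The teardown traced just before the END banner reduces to the sketch below; the delete/kill/wait steps, module removals, and the iptables-save pipeline are taken from the trace, while the final namespace deletion is an assumed expansion of the remove_spdk_ns helper rather than its verbatim body:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Drop the test subsystem, then stop the target (pid from this run).
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 674799 && wait 674799

    # Unload the host-side kernel modules pulled in by "modprobe nvme-tcp";
    # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring going out.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # "iptr": rewrite the ruleset without anything tagged SPDK_NVMF, which
    # strips the ACCEPT rule added for TCP port 4420 during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Assumed body of remove_spdk_ns, plus the flush seen in the trace.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
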
00:22:30.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:30.440 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:30.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.700 --rc genhtml_branch_coverage=1 00:22:30.700 --rc genhtml_function_coverage=1 00:22:30.700 --rc genhtml_legend=1 00:22:30.700 --rc geninfo_all_blocks=1 00:22:30.700 --rc geninfo_unexecuted_blocks=1 00:22:30.700 00:22:30.700 ' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:30.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.700 --rc genhtml_branch_coverage=1 00:22:30.700 --rc genhtml_function_coverage=1 00:22:30.700 --rc genhtml_legend=1 00:22:30.700 --rc geninfo_all_blocks=1 00:22:30.700 --rc geninfo_unexecuted_blocks=1 00:22:30.700 00:22:30.700 ' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:30.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.700 --rc genhtml_branch_coverage=1 00:22:30.700 --rc genhtml_function_coverage=1 00:22:30.700 --rc genhtml_legend=1 00:22:30.700 --rc geninfo_all_blocks=1 00:22:30.700 --rc geninfo_unexecuted_blocks=1 00:22:30.700 00:22:30.700 ' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:30.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.700 --rc genhtml_branch_coverage=1 00:22:30.700 --rc genhtml_function_coverage=1 00:22:30.700 --rc genhtml_legend=1 00:22:30.700 --rc geninfo_all_blocks=1 00:22:30.700 --rc geninfo_unexecuted_blocks=1 00:22:30.700 00:22:30.700 ' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.700 12:34:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:30.700 
12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:30.700 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.701 12:34:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:32.710 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.710 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:32.711 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:32.711 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:32.711 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.711 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.970 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:22:32.971 00:22:32.971 --- 10.0.0.2 ping statistics --- 00:22:32.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.971 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:32.971 00:22:32.971 --- 10.0.0.1 ping statistics --- 00:22:32.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.971 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=678892 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 678892 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 678892 ']' 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:32.971 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.971 [2024-10-30 12:34:05.591764] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
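
As in the perf test, the fio host test boots its own nvmf_tgt (pid 678892 here) inside cvl_0_0_ns_spdk and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. The trace exposes the helper's parameters (rpc_addr=/var/tmp/spdk.sock, max_retries=100) and its exit checks, but not the loop body; a plausible reconstruction under those assumptions, not the verbatim autotest_common.sh implementation:

    # Plausible shape of waitforlisten given the variables visible in the
    # trace; the retry loop body is an assumption, not the real helper.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # Any answered RPC means the app is up and serving $rpc_addr.
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && break
            sleep 0.1
        done
        (( i == 0 )) && return 1   # retries exhausted, mirrors the trace's check
        return 0
    }

In this run the socket comes up almost immediately: only the DPDK/reactor startup notices below separate the launch from the helper's return 0.
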
00:22:32.971 [2024-10-30 12:34:05.591864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.229 [2024-10-30 12:34:05.667592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.229 [2024-10-30 12:34:05.731131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.229 [2024-10-30 12:34:05.731190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.229 [2024-10-30 12:34:05.731219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.229 [2024-10-30 12:34:05.731232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.229 [2024-10-30 12:34:05.731242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.229 [2024-10-30 12:34:05.732881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.229 [2024-10-30 12:34:05.732969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.229 [2024-10-30 12:34:05.733026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.229 [2024-10-30 12:34:05.733029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.229 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:33.229 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:22:33.229 12:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:33.487 [2024-10-30 12:34:06.165901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.745 12:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:33.745 12:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.745 12:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.745 12:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:34.003 Malloc1 00:22:34.003 12:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.261 12:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:34.518 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.083 [2024-10-30 12:34:07.460490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.083 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:35.341 12:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:35.341 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:35.341 fio-3.35 00:22:35.341 Starting 1 thread 00:22:37.870 00:22:37.870 test: (groupid=0, jobs=1): 
err= 0: pid=679258: Wed Oct 30 12:34:10 2024 00:22:37.870 read: IOPS=8895, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2006msec) 00:22:37.870 slat (usec): min=2, max=167, avg= 2.76, stdev= 1.99 00:22:37.870 clat (usec): min=2549, max=14068, avg=7868.02, stdev=647.98 00:22:37.870 lat (usec): min=2587, max=14071, avg=7870.78, stdev=647.89 00:22:37.870 clat percentiles (usec): 00:22:37.870 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:22:37.870 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:22:37.870 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:22:37.870 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[12387], 99.95th=[13304], 00:22:37.870 | 99.99th=[13960] 00:22:37.870 bw ( KiB/s): min=34744, max=36200, per=99.89%, avg=35542.00, stdev=655.62, samples=4 00:22:37.870 iops : min= 8686, max= 9050, avg=8885.50, stdev=163.91, samples=4 00:22:37.870 write: IOPS=8909, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec); 0 zone resets 00:22:37.870 slat (usec): min=2, max=136, avg= 2.86, stdev= 1.63 00:22:37.870 clat (usec): min=1465, max=11898, avg=6455.50, stdev=531.42 00:22:37.870 lat (usec): min=1475, max=11901, avg=6458.36, stdev=531.40 00:22:37.870 clat percentiles (usec): 00:22:37.870 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:22:37.870 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:22:37.870 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:22:37.870 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[10159], 99.95th=[11076], 00:22:37.870 | 99.99th=[11863] 00:22:37.870 bw ( KiB/s): min=35536, max=35880, per=100.00%, avg=35636.00, stdev=164.02, samples=4 00:22:37.870 iops : min= 8884, max= 8970, avg=8909.00, stdev=41.00, samples=4 00:22:37.870 lat (msec) : 2=0.03%, 4=0.11%, 10=99.68%, 20=0.18% 00:22:37.871 cpu : usr=64.24%, sys=34.11%, ctx=91, majf=0, minf=36 00:22:37.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:37.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:37.871 issued rwts: total=17844,17872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:37.871 00:22:37.871 Run status group 0 (all jobs): 00:22:37.871 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:22:37.871 WRITE: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
sanitizers 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:37.871 12:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:38.129 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:38.129 fio-3.35 00:22:38.129 Starting 1 thread 00:22:40.659 00:22:40.659 test: (groupid=0, jobs=1): err= 0: pid=679659: Wed Oct 30 12:34:13 2024 00:22:40.659 read: IOPS=8372, BW=131MiB/s (137MB/s)(263MiB/2009msec) 00:22:40.659 slat (nsec): min=2777, max=93739, avg=3645.99, stdev=1745.16 00:22:40.659 clat (usec): min=2268, max=16448, avg=8798.74, stdev=2008.48 00:22:40.659 lat (usec): min=2272, max=16451, avg=8802.38, stdev=2008.47 00:22:40.659 clat percentiles (usec): 00:22:40.659 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7111], 00:22:40.659 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:22:40.659 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12256], 00:22:40.659 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15926], 99.95th=[15926], 00:22:40.659 | 99.99th=[16057] 00:22:40.659 bw ( KiB/s): min=61856, max=76000, per=51.13%, avg=68496.00, stdev=7085.50, samples=4 00:22:40.659 iops : min= 3866, max= 4750, avg=4281.00, stdev=442.84, samples=4 00:22:40.659 write: IOPS=4928, BW=77.0MiB/s (80.7MB/s)(140MiB/1813msec); 0 zone resets 00:22:40.659 slat 
(usec): min=30, max=146, avg=34.15, stdev= 5.78 00:22:40.659 clat (usec): min=5065, max=21954, avg=11628.99, stdev=1984.54 00:22:40.659 lat (usec): min=5097, max=21987, avg=11663.13, stdev=1984.45 00:22:40.659 clat percentiles (usec): 00:22:40.659 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:22:40.659 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:22:40.659 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14353], 95.00th=[15270], 00:22:40.659 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18482], 99.95th=[18744], 00:22:40.659 | 99.99th=[21890] 00:22:40.659 bw ( KiB/s): min=62240, max=78240, per=90.09%, avg=71040.00, stdev=7620.48, samples=4 00:22:40.659 iops : min= 3890, max= 4890, avg=4440.00, stdev=476.28, samples=4 00:22:40.659 lat (msec) : 4=0.15%, 10=56.32%, 20=43.52%, 50=0.01% 00:22:40.659 cpu : usr=76.89%, sys=21.91%, ctx=40, majf=0, minf=63 00:22:40.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:40.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:40.659 issued rwts: total=16821,8935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:40.659 00:22:40.659 Run status group 0 (all jobs): 00:22:40.659 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2009-2009msec 00:22:40.659 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=140MiB (146MB), run=1813-1813msec 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.659 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.659 rmmod nvme_tcp 00:22:40.918 rmmod nvme_fabrics 00:22:40.918 rmmod nvme_keyring 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 678892 ']' 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 678892 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 678892 ']' 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # kill -0 678892 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 678892 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 678892' 00:22:40.918 killing process with pid 678892 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 678892 00:22:40.918 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 678892 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.176 12:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.083 00:22:43.083 real 0m12.677s 00:22:43.083 user 0m37.472s 00:22:43.083 sys 0m4.316s 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.083 ************************************ 00:22:43.083 END TEST nvmf_fio_host 00:22:43.083 ************************************ 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.083 12:34:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.343 ************************************ 00:22:43.343 START TEST nvmf_failover 00:22:43.343 ************************************ 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:43.343 * Looking for test storage... 00:22:43.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.343 --rc genhtml_branch_coverage=1 00:22:43.343 --rc genhtml_function_coverage=1 00:22:43.343 --rc genhtml_legend=1 00:22:43.343 --rc geninfo_all_blocks=1 00:22:43.343 --rc geninfo_unexecuted_blocks=1 00:22:43.343 00:22:43.343 ' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.343 --rc genhtml_branch_coverage=1 00:22:43.343 --rc genhtml_function_coverage=1 00:22:43.343 --rc genhtml_legend=1 00:22:43.343 --rc geninfo_all_blocks=1 00:22:43.343 --rc geninfo_unexecuted_blocks=1 00:22:43.343 00:22:43.343 ' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.343 --rc genhtml_branch_coverage=1 00:22:43.343 --rc genhtml_function_coverage=1 00:22:43.343 --rc genhtml_legend=1 00:22:43.343 --rc geninfo_all_blocks=1 00:22:43.343 --rc geninfo_unexecuted_blocks=1 00:22:43.343 00:22:43.343 ' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.343 --rc genhtml_branch_coverage=1 00:22:43.343 --rc genhtml_function_coverage=1 00:22:43.343 --rc genhtml_legend=1 00:22:43.343 --rc geninfo_all_blocks=1 00:22:43.343 --rc geninfo_unexecuted_blocks=1 00:22:43.343 00:22:43.343 ' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.343 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
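Note on the "[: : integer expression expected" diagnostic a few lines up: nvmf/common.sh line 33 ends up running '[' '' -eq 1 ']' because the variable under test is empty in this job, and test's -eq needs an integer on both sides, so it prints the complaint and returns status 2, which the harness treats as false and rides over. A minimal sketch of the pitfall and the usual guards, using an illustrative variable name rather than the one common.sh actually tests:

#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" diagnostic from the log.
unset MAYBE_FLAG                        # stand-in name for this sketch
if [ "$MAYBE_FLAG" -eq 1 ]; then        # expands to [ '' -eq 1 ]: error on stderr, status 2
    echo "flag set"
fi

# Guard 1: default the expansion so -eq always sees a number.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then echo "flag set"; fi

# Guard 2: only compare numerically when the value is non-empty.
if [ -n "$MAYBE_FLAG" ] && [ "$MAYBE_FLAG" -eq 1 ]; then echo "flag set"; fi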
00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.344 12:34:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.876 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:45.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:45.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:45.877 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:45.877 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.877 12:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:45.877 00:22:45.877 --- 10.0.0.2 ping statistics --- 00:22:45.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.877 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:22:45.877 00:22:45.877 --- 10.0.0.1 ping statistics --- 00:22:45.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.877 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=681916 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 681916 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 681916 ']' 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.877 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.877 [2024-10-30 12:34:18.162474] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:22:45.877 [2024-10-30 12:34:18.162555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.877 [2024-10-30 12:34:18.234135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.877 [2024-10-30 12:34:18.291901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
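The namespace plumbing above gives the run its two-endpoint topology: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings confirm that traffic crosses the cabled E810 pair rather than loopback. The same shape can be rebuilt on a host without back-to-back NICs by letting a veth pair stand in for the two physical ports; the names and addresses below mirror the log, while the veth substitution is this sketch's assumption:

#!/usr/bin/env bash
# One-namespace NVMe/TCP test topology, veth standing in for the E810 pair.
set -ex
ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0     # replaces the cabled ports
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                  # mirror of the accept rule above
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace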
00:22:45.877 [2024-10-30 12:34:18.291964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.877 [2024-10-30 12:34:18.291992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.877 [2024-10-30 12:34:18.292003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.878 [2024-10-30 12:34:18.292012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.878 [2024-10-30 12:34:18.293513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.878 [2024-10-30 12:34:18.293575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.878 [2024-10-30 12:34:18.293578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.878 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.136 [2024-10-30 12:34:18.746229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.136 12:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:46.702 Malloc0 00:22:46.702 12:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.702 12:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:47.267 12:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.267 [2024-10-30 12:34:19.920903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.267 12:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:47.833 [2024-10-30 12:34:20.209858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:47.833 [2024-10-30 12:34:20.478707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=682204 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 682204 /var/tmp/bdevperf.sock 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 682204 ']' 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:47.833 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:48.399 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:48.399 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:48.399 12:34:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:48.656 NVMe0n1 00:22:48.656 12:34:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:49.221 00:22:49.221 12:34:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=682339 00:22:49.221 12:34:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.221 12:34:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:50.155 12:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.413 12:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:53.693 12:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:53.949 00:22:53.949 12:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.206 12:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:57.483 12:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.741 [2024-10-30 12:34:30.181357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.741 12:34:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:58.674 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:58.933 [2024-10-30 12:34:31.513840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197c220 is same with the state(6) to be set 00:22:58.933 [... the same tcp.c:1773 nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x197c220 repeats 20 more times, timestamps 12:34:31.513917 through 12:34:31.514163, during teardown of the 4422 listener ...]
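This is the failover choreography itself: bdevperf drives verify I/O through controller NVMe0 (attached with -x failover, with port 4421 added as a second path right after), while the script deletes and re-adds listeners so the active path is forced around the ring 4420 -> 4421 -> 4422 -> 4420; the recv-state noise above accompanies the 4422 queue pair teardown. A condensed sketch of the driving sequence, assuming the target, subsystem and listeners from earlier in the log are already up:

#!/usr/bin/env bash
# Listener rotation as driven by host/failover.sh (ports and NQN from this run).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf="$rpc -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n "$nqn" -x failover                                       # second path for NVMe0

$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # I/O hops to 4421
sleep 3
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
    -f ipv4 -n "$nqn" -x failover                                       # third path
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # hop to 4422
sleep 3
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # restore 4420
sleep 1
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422   # back to 4420

In-flight commands on a deleted listener complete with ABORTED - SQ DELETION and are retried on the surviving path; they are what the io_failed counter in the results below is counting.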
00:22:58.933 12:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 682339 00:23:05.496 { 00:23:05.496 "results": [ 00:23:05.496 { 00:23:05.496 "job": "NVMe0n1", 00:23:05.496 "core_mask": "0x1", 00:23:05.496 "workload": "verify", 00:23:05.496 "status": "finished", 00:23:05.496 "verify_range": { 00:23:05.496 "start": 0, 00:23:05.496 "length": 16384 00:23:05.496 }, 00:23:05.496 "queue_depth": 128, 00:23:05.496 "io_size": 4096, 00:23:05.496 "runtime": 15.01069, 00:23:05.496 "iops": 8357.710405051334, 00:23:05.496 "mibps": 32.64730626973177, 00:23:05.496 "io_failed": 11124, 00:23:05.496 "io_timeout": 0, 00:23:05.496 "avg_latency_us": 14039.866248642422, 00:23:05.496 "min_latency_us": 652.325925925926, 00:23:05.496 "max_latency_us": 17767.53777777778 00:23:05.496 } 00:23:05.496 ], 00:23:05.496 "core_count": 1 00:23:05.496 } 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 682204 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 682204 ']' 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 682204 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 682204 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 682204' 00:23:05.496 killing process with pid 682204 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 682204 00:23:05.496 12:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 682204 00:23:05.496 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:05.496 [2024-10-30 12:34:20.547923] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:23:05.496 [2024-10-30 12:34:20.548007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682204 ] 00:23:05.496 [2024-10-30 12:34:20.615319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.496 [2024-10-30 12:34:20.673624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.496 Running I/O for 15 seconds... 00:23:05.496 8451.00 IOPS, 33.01 MiB/s [2024-10-30T11:34:38.177Z] [2024-10-30 12:34:23.071446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.496 [2024-10-30 12:34:23.071511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.496 [2024-10-30 12:34:23.071539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.496 [2024-10-30 12:34:23.071554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.496 [2024-10-30 12:34:23.071570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.496 [2024-10-30 12:34:23.071584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.496 [2024-10-30 12:34:23.071599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.496 [2024-10-30 12:34:23.071613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.496 [2024-10-30 12:34:23.071629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.496 [2024-10-30 12:34:23.071643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.496 [2024-10-30 12:34:23.071658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.496 [2024-10-30 12:34:23.071672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.496 [2024-10-30 12:34:23.071687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.496 [2024-10-30 12:34:23.071701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82808 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.071984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.071997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 
[2024-10-30 12:34:23.072055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.497 [2024-10-30 12:34:23.072703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.497 [2024-10-30 12:34:23.072899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.497 [2024-10-30 12:34:23.072914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.498 [2024-10-30 12:34:23.072927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.072946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.498 [2024-10-30 12:34:23.072961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.072975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.072989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 
12:34:23.073229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:24 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.073980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.073995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.074009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.074024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.074041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.074056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.074070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.074084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.498 [2024-10-30 12:34:23.074098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.498 [2024-10-30 12:34:23.074112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83384 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:23:05.498 [2024-10-30 12:34:23.074126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.498 [2024-10-30 12:34:23.074140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:05.499 [2024-10-30 12:34:23.074367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
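From this point the drain changes character: the entries above print in-flight commands with their SGL descriptors before completing them as ABORTED - SQ DELETION, while the "Command completed manually" records that follow come from nvme_qpair_abort_queued_reqs finishing requests that were still queued in software, which is apparently why they print with PRP1 0x0 PRP2 0x0. A small sketch for tallying the two classes from a saved copy of this console output ("console.log" is a hypothetical file name, not produced by this job):

# Count the two kinds of aborted I/O in a saved copy of this log.
inflight=$(grep -c 'nvme_io_qpair_print_command.*SGL' console.log)
queued=$(grep -c 'Command completed manually' console.log)
aborted=$(grep -c 'ABORTED - SQ DELETION' console.log)
printf 'in-flight (SGL) command prints: %s\n' "$inflight"
printf 'queued, manually completed:     %s\n' "$queued"
printf 'ABORTED completions in total:   %s\n' "$aborted"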
00:23:05.499 [2024-10-30 12:34:23.074452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.499 [2024-10-30 12:34:23.074465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.499 [2024-10-30 12:34:23.074512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83472 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.499 [2024-10-30 12:34:23.074558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83480 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.499 [2024-10-30 12:34:23.074605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83488 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.499 [2024-10-30 12:34:23.074651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83496 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.499 [2024-10-30 12:34:23.074697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.499 [2024-10-30 12:34:23.074708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83504 len:8 PRP1 0x0 PRP2 0x0
00:23:05.499 [2024-10-30 12:34:23.074721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.499 [2024-10-30 12:34:23.074739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30
12:34:23.074750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.074761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83512 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.074773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.074786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.074797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.074808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83520 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.074824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.074837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.074848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.074858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83528 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.074871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.074883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.074894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.074905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83536 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.074917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.074929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.074940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.074951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83544 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.074963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.074975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.074986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.074997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83552 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075032] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83560 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83568 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83576 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83584 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83592 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83600 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.499 [2024-10-30 12:34:23.075327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:05.499 [2024-10-30 12:34:23.075338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83608 len:8 PRP1 0x0 PRP2 0x0 00:23:05.499 [2024-10-30 12:34:23.075350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.499 [2024-10-30 12:34:23.075362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.500 [2024-10-30 12:34:23.075373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.500 [2024-10-30 12:34:23.075384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83616 len:8 PRP1 0x0 PRP2 0x0 00:23:05.500 [2024-10-30 12:34:23.075396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:23.075408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.500 [2024-10-30 12:34:23.075419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.500 [2024-10-30 12:34:23.075430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83624 len:8 PRP1 0x0 PRP2 0x0 00:23:05.500 [2024-10-30 12:34:23.075442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:23.075455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.500 [2024-10-30 12:34:23.075466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.500 [2024-10-30 12:34:23.075476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83632 len:8 PRP1 0x0 PRP2 0x0 00:23:05.500 [2024-10-30 12:34:23.075488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:23.075506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.500 [2024-10-30 12:34:23.075517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.500 [2024-10-30 12:34:23.075528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82696 len:8 PRP1 0x0 PRP2 0x0 00:23:05.500 [2024-10-30 12:34:23.075541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:23.075553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.500 [2024-10-30 12:34:23.075568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.500 [2024-10-30 12:34:23.075579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82704 len:8 PRP1 0x0 PRP2 0x0 00:23:05.500 [2024-10-30 12:34:23.075592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:23.075604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.500 [2024-10-30 12:34:23.075615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.500 [2024-10-30 
12:34:23.075625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0
00:23:05.500 [2024-10-30 12:34:23.075637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.075650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.500 [2024-10-30 12:34:23.075661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.500 [2024-10-30 12:34:23.075671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82720 len:8 PRP1 0x0 PRP2 0x0
00:23:05.500 [2024-10-30 12:34:23.075683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.075696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.500 [2024-10-30 12:34:23.075706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.500 [2024-10-30 12:34:23.075717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0
00:23:05.500 [2024-10-30 12:34:23.075729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.075741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.500 [2024-10-30 12:34:23.075752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.500 [2024-10-30 12:34:23.075762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0
00:23:05.500 [2024-10-30 12:34:23.075775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.075787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.500 [2024-10-30 12:34:23.075797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.500 [2024-10-30 12:34:23.075808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0
00:23:05.500 [2024-10-30 12:34:23.075820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.075885] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:05.500 [2024-10-30 12:34:23.075922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:23.075940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.075955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:23.075968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
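The bdev_nvme_failover_trid notice above marks the host switching the controller from 10.0.0.2:4420 to the alternate path 10.0.0.2:4421 that had been registered ahead of time. A sketch of one common way such a test is wired up with stock SPDK RPCs follows; the socket path, bdev name and the -x multipath mode are assumptions about this job's configuration, not values read from this log:

rpc=scripts/rpc.py              # rpc.py inside an SPDK checkout (assumed path)
sock=/var/tmp/bdevperf.sock     # bdevperf's own RPC socket (assumed)

# Attach the same subsystem over both portals; the second trid is kept as a
# standby path that 4420 can fail over to.
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# Removing the first listener on the target side (default target RPC socket)
# severs 4420 and triggers the failover transition logged above.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
    -a 10.0.0.2 -s 4420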
00:23:05.500 [2024-10-30 12:34:23.075982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:23.075999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.076013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:23.076025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:23.076038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:05.500 [2024-10-30 12:34:23.076104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400560 (9): Bad file descriptor
00:23:05.500 [2024-10-30 12:34:23.079312] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:05.500 [2024-10-30 12:34:23.120687] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:05.500 8419.00 IOPS, 32.89 MiB/s [2024-10-30T11:34:38.177Z]
8487.33 IOPS, 33.15 MiB/s [2024-10-30T11:34:38.181Z]
8495.25 IOPS, 33.18 MiB/s [2024-10-30T11:34:38.181Z]
[2024-10-30 12:34:26.844130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:26.844191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:26.844209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:26.844223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:26.844237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:26.844262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:26.844277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:05.500 [2024-10-30 12:34:26.844291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:26.844303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2400560 is same with the state(6) to be set
00:23:05.500 [2024-10-30 12:34:26.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.500 [2024-10-30 12:34:26.844378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.500 [2024-10-30 12:34:26.844400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844415]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:26.844431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:26.844459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:26.844488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:26.844526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:26.844554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.500 [2024-10-30 12:34:26.844598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.500 [2024-10-30 12:34:26.844611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.501 [2024-10-30 12:34:26.844984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.501 [2024-10-30 12:34:26.844997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.501 [2024-10-30 12:34:26.845011-848086] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every queued I/O on sqid:1 (READ lba 86048-86416, WRITE lba 86424-86888, len:8) printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (identical command/completion pair repeated per command)
00:23:05.503 [2024-10-30 12:34:26.848115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:05.503 [2024-10-30 12:34:26.848130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:05.503 [2024-10-30 12:34:26.848142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0
00:23:05.503 [2024-10-30 12:34:26.848154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:05.503 [2024-10-30 12:34:26.848215] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:05.503 [2024-10-30 12:34:26.848234] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:05.503 [2024-10-30 12:34:26.851497] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:05.503 [2024-10-30 12:34:26.851546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400560 (9): Bad file descriptor
00:23:05.503 8470.20 IOPS, 33.09 MiB/s [2024-10-30T11:34:38.184Z] [2024-10-30 12:34:27.015509] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:05.503 8297.33 IOPS, 32.41 MiB/s [2024-10-30T11:34:38.184Z] 8354.86 IOPS, 32.64 MiB/s [2024-10-30T11:34:38.184Z] 8398.75 IOPS, 32.81 MiB/s [2024-10-30T11:34:38.184Z] 8417.33 IOPS, 32.88 MiB/s [2024-10-30T11:34:38.184Z]
00:23:05.503 [2024-10-30 12:34:31.516099-518221] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every queued I/O on sqid:1 (READ lba 40640-40816, WRITE lba 40832-41208, len:8) printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (identical command/completion pair repeated per command)
00:23:05.505 [2024-10-30 12:34:31.518263-518807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:41216-41304 len:8 PRP1 0x0 PRP2 0x0, each with completion [2024-10-30 12:34:31.518820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.518833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.518843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.518853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41312 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.518866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.518879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.518889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.518900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41320 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.518911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.518924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.518935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.518946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41328 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.518958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.518970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.518981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.518992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41336 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41344 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41352 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 
12:34:31.519108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41360 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41368 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41376 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41384 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41392 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41400 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519398] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41408 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41416 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41424 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41432 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41440 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.506 [2024-10-30 12:34:31.519645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41448 len:8 PRP1 0x0 PRP2 0x0 00:23:05.506 [2024-10-30 12:34:31.519668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.506 [2024-10-30 12:34:31.519681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:05.506 [2024-10-30 12:34:31.519691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.506 [2024-10-30 12:34:31.519702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41456 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.519726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.519737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.519747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41464 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.519771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.519782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.519792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41472 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.519816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.519827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.519837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41480 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.519865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.519876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.519887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41488 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.519911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.519921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.519932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41496 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.519956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.519966] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.519977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41504 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.519989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41512 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41520 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41528 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41536 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41544 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41552 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41560 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41568 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41576 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41584 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41592 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 
[2024-10-30 12:34:31.520535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41600 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41608 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41616 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41624 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41632 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41640 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.507 [2024-10-30 12:34:31.520811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.507 [2024-10-30 12:34:31.520821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41648 len:8 PRP1 0x0 PRP2 0x0 00:23:05.507 [2024-10-30 12:34:31.520833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.507 [2024-10-30 12:34:31.520846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.508 [2024-10-30 12:34:31.520856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.508 [2024-10-30 12:34:31.520867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41656 len:8 PRP1 0x0 PRP2 0x0 00:23:05.508 [2024-10-30 12:34:31.520879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.508 [2024-10-30 12:34:31.520891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:05.508 [2024-10-30 12:34:31.520901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:05.508 [2024-10-30 12:34:31.520912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40824 len:8 PRP1 0x0 PRP2 0x0 00:23:05.508 [2024-10-30 12:34:31.520927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.508 [2024-10-30 12:34:31.520992] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:05.508 [2024-10-30 12:34:31.521030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.508 [2024-10-30 12:34:31.521048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.508 [2024-10-30 12:34:31.521063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.508 [2024-10-30 12:34:31.521075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.508 [2024-10-30 12:34:31.521088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.508 [2024-10-30 12:34:31.521101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.508 [2024-10-30 12:34:31.521114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.508 [2024-10-30 12:34:31.521126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.508 [2024-10-30 12:34:31.521139] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
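The burst of identical completions condensed above is the expected signature of a failover on an active qpair: when bdev_nvme gives up on the 10.0.0.2:4422 path, the submission queue is deleted and every queued WRITE is finished with the generic NVMe status "Command Aborted due to SQ Deletion" (SCT 00h / SC 08h) before the controller is marked failed and reset against the next trid. When triaging a run, a quick way to size such a storm is to count the aborts in the captured log; a minimal sketch, assuming the try.txt capture path this test writes:

  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt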
00:23:05.508 [2024-10-30 12:34:31.521192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2400560 (9): Bad file descriptor 00:23:05.508 [2024-10-30 12:34:31.524400] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:05.508 [2024-10-30 12:34:31.694722] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:05.508 8275.70 IOPS, 32.33 MiB/s [2024-10-30T11:34:38.189Z] 8302.45 IOPS, 32.43 MiB/s [2024-10-30T11:34:38.189Z] 8314.83 IOPS, 32.48 MiB/s [2024-10-30T11:34:38.189Z] 8329.23 IOPS, 32.54 MiB/s [2024-10-30T11:34:38.189Z] 8352.43 IOPS, 32.63 MiB/s 00:23:05.508 Latency(us) 00:23:05.508 [2024-10-30T11:34:38.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.508 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:05.508 Verification LBA range: start 0x0 length 0x4000 00:23:05.508 NVMe0n1 : 15.01 8357.71 32.65 741.07 0.00 14039.87 652.33 17767.54 00:23:05.508 [2024-10-30T11:34:38.189Z] =================================================================================================================== 00:23:05.508 [2024-10-30T11:34:38.189Z] Total : 8357.71 32.65 741.07 0.00 14039.87 652.33 17767.54 00:23:05.508 Received shutdown signal, test time was about 15.000000 seconds 00:23:05.508 00:23:05.508 Latency(us) 00:23:05.508 [2024-10-30T11:34:38.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.508 [2024-10-30T11:34:38.189Z] =================================================================================================================== 00:23:05.508 [2024-10-30T11:34:38.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=684177 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 684177 /var/tmp/bdevperf.sock 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 684177 ']' 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
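Just before launching the second bdevperf, host/failover.sh@65-@67 above assert that the 15-second run produced exactly three 'Resetting controller successful' messages, one per failover hop. The launch itself follows the harness's bdevperf-under-RPC pattern: -z starts bdevperf idle, -r points it at a private UNIX-domain RPC socket, and the script blocks until that socket answers before wiring up controllers. A minimal sketch of the same pattern (flags copied from the trace; SPDK's rpc_get_methods is used here as the readiness probe where the harness uses its waitforlisten helper):

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # poll the RPC socket until bdevperf is ready to accept bdev_nvme_* calls
  until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done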
00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:05.508 [2024-10-30 12:34:37.729824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:05.508 12:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:05.508 [2024-10-30 12:34:37.990689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:05.508 12:34:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:05.766 NVMe0n1 00:23:05.766 12:34:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:06.331 00:23:06.331 12:34:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:06.589 00:23:06.589 12:34:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.589 12:34:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:06.847 12:34:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.106 12:34:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:10.383 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.383 12:34:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:10.383 12:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=684847 00:23:10.383 12:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.383 12:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 684847 00:23:11.777 { 00:23:11.777 "results": [ 00:23:11.777 { 00:23:11.777 "job": "NVMe0n1", 00:23:11.777 "core_mask": "0x1", 00:23:11.777 
"workload": "verify", 00:23:11.777 "status": "finished", 00:23:11.777 "verify_range": { 00:23:11.777 "start": 0, 00:23:11.777 "length": 16384 00:23:11.777 }, 00:23:11.777 "queue_depth": 128, 00:23:11.777 "io_size": 4096, 00:23:11.777 "runtime": 1.016493, 00:23:11.777 "iops": 8383.727187496619, 00:23:11.777 "mibps": 32.74893432615867, 00:23:11.777 "io_failed": 0, 00:23:11.777 "io_timeout": 0, 00:23:11.777 "avg_latency_us": 15208.320862256294, 00:23:11.777 "min_latency_us": 3021.9377777777777, 00:23:11.777 "max_latency_us": 13398.471111111112 00:23:11.777 } 00:23:11.777 ], 00:23:11.777 "core_count": 1 00:23:11.777 } 00:23:11.777 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:11.777 [2024-10-30 12:34:37.243968] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:23:11.777 [2024-10-30 12:34:37.244054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684177 ] 00:23:11.777 [2024-10-30 12:34:37.313324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.777 [2024-10-30 12:34:37.372854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.777 [2024-10-30 12:34:39.720195] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:11.777 [2024-10-30 12:34:39.720297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.777 [2024-10-30 12:34:39.720337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.777 [2024-10-30 12:34:39.720355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.777 [2024-10-30 12:34:39.720369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.777 [2024-10-30 12:34:39.720383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.777 [2024-10-30 12:34:39.720397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.777 [2024-10-30 12:34:39.720411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.777 [2024-10-30 12:34:39.720424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.777 [2024-10-30 12:34:39.720438] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:11.777 [2024-10-30 12:34:39.720487] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:11.777 [2024-10-30 12:34:39.720518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c6560 (9): Bad file descriptor 00:23:11.777 [2024-10-30 12:34:39.731148] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:11.777 Running I/O for 1 seconds... 00:23:11.777 8319.00 IOPS, 32.50 MiB/s 00:23:11.777 Latency(us) 00:23:11.777 [2024-10-30T11:34:44.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.777 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:11.777 Verification LBA range: start 0x0 length 0x4000 00:23:11.777 NVMe0n1 : 1.02 8383.73 32.75 0.00 0.00 15208.32 3021.94 13398.47 00:23:11.777 [2024-10-30T11:34:44.458Z] =================================================================================================================== 00:23:11.777 [2024-10-30T11:34:44.458Z] Total : 8383.73 32.75 0.00 0.00 15208.32 3021.94 13398.47 00:23:11.777 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.777 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:11.777 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.051 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.051 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:12.309 12:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.875 12:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 684177 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 684177 ']' 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 684177 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 684177 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
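Condensed, the RPC choreography traced above (host/failover.sh@76 through @101) first builds a three-path topology and then removes paths one at a time, letting bdev_nvme fail over after each removal; the commands are copied from the xtrace, only the loops are editorial:

  # target side: expose the subsystem on two extra ports besides 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side: attach all three trids under the single bdev name NVMe0;
  # -x failover keeps one path active and holds the others as standby targets
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # remove paths in the traced order; after each removal the controller must survive
  for port in 4420 4422 4421; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
      sleep 3
  done

The reported numbers above are internally consistent as well: 8383.73 IOPS at a 4096-byte IO size is 8383.73 * 4096 / 1048576, or about 32.75 MiB/s, matching the "mibps" field in the JSON results.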
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 684177' 00:23:16.155 killing process with pid 684177 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 684177 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 684177 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:16.155 12:34:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.414 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.414 rmmod nvme_tcp 00:23:16.414 rmmod nvme_fabrics 00:23:16.672 rmmod nvme_keyring 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 681916 ']' 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 681916 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 681916 ']' 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 681916 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 681916 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 681916' 00:23:16.672 killing process with pid 681916 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 681916 00:23:16.672 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 681916 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
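The killprocess traces above follow a guard-then-kill shape; a condensed sketch of that logic, inferred from the xtrace rather than copied from autotest_common.sh (the sudo branch of the real helper is elided):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if the pid is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1 above
      [ "$name" = sudo ] && return 0            # the sudo case takes a different path (elided)
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }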
00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.932 12:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.838 00:23:18.838 real 0m35.688s 00:23:18.838 user 2m6.190s 00:23:18.838 sys 0m5.985s 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:18.838 ************************************ 00:23:18.838 END TEST nvmf_failover 00:23:18.838 ************************************ 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.838 ************************************ 00:23:18.838 START TEST nvmf_host_discovery 00:23:18.838 ************************************ 00:23:18.838 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:19.097 * Looking for test storage... 
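The teardown traced above (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini) unloads the kernel initiator modules and strips only SPDK-tagged firewall rules before flushing the test NIC; condensed, with module and tag names taken from the trace:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # keep every iptables rule except those tagged SPDK_NVMF by the test setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1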
00:23:19.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:19.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.097 --rc genhtml_branch_coverage=1 00:23:19.097 --rc genhtml_function_coverage=1 00:23:19.097 --rc genhtml_legend=1 00:23:19.097 --rc geninfo_all_blocks=1 00:23:19.097 --rc geninfo_unexecuted_blocks=1 00:23:19.097 00:23:19.097 ' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:19.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.097 --rc genhtml_branch_coverage=1 00:23:19.097 --rc genhtml_function_coverage=1 00:23:19.097 --rc genhtml_legend=1 00:23:19.097 --rc geninfo_all_blocks=1 00:23:19.097 --rc geninfo_unexecuted_blocks=1 00:23:19.097 00:23:19.097 ' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:19.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.097 --rc genhtml_branch_coverage=1 00:23:19.097 --rc genhtml_function_coverage=1 00:23:19.097 --rc genhtml_legend=1 00:23:19.097 --rc geninfo_all_blocks=1 00:23:19.097 --rc geninfo_unexecuted_blocks=1 00:23:19.097 00:23:19.097 ' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:19.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.097 --rc genhtml_branch_coverage=1 00:23:19.097 --rc genhtml_function_coverage=1 00:23:19.097 --rc genhtml_legend=1 00:23:19.097 --rc geninfo_all_blocks=1 00:23:19.097 --rc geninfo_unexecuted_blocks=1 00:23:19.097 00:23:19.097 ' 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:19.097 12:34:51 
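The lt 1.15 2 walk above is scripts/common.sh's field-wise version comparison: both strings are split on ".", "-" and ":" and compared numerically field by field, so 1.15 sorts below 2 because 1 < 2 in the first field. A simplified sketch of the idea (not the shipped cmp_versions implementation, and assuming purely numeric fields):

  lt() {    # return 0 when $1 sorts strictly below $2
      local IFS='.-:' i
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields compare as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov predates 2.x"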
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.097 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
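The "integer expression expected" message above is non-fatal: line 33 of test/nvmf/common.sh compares a variable that is empty in this run ('[' '' -eq 1 ']'), the test simply returns non-zero, and the script carries on. A sketch of the failure mode and the usual guard (FLAG is a stand-in name, not the variable the script actually tests):

    # Sketch: why '' -eq 1 prints an error, and a defaulted form that does not.
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo yes        # stderr: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ] && echo yes   # empty defaults to 0: a clean false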
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.098 12:34:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:21.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:21.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.636 12:34:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:21.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:21.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.636 
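The loop above resolves each supported NIC from PCI function to kernel interface: the e810/x722/mlx arrays are filled from a vendor:device cache (0x8086:0x159b matches the two E810 ports here), and the bound net device is found by globbing /sys/bus/pci/devices/$pci/net/. A standalone sketch of that sysfs lookup (PCI addresses taken from this run):

    # Sketch: PCI function -> net interface via sysfs, as in the loop above.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        vendor=$(< "/sys/bus/pci/devices/$pci/vendor")    # 0x8086 here
        device=$(< "/sys/bus/pci/devices/$pci/device")    # 0x159b (E810)
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$net" ] && echo "Found ${net##*/} under $pci ($vendor - $device)"
        done
    done

On this machine both functions resolve to ice-driven interfaces cvl_0_0 and cvl_0_1.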
12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:23:21.636 00:23:21.636 --- 10.0.0.2 ping statistics --- 00:23:21.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.636 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:23:21.636 00:23:21.636 --- 10.0.0.1 ping statistics --- 00:23:21.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.636 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.636 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=687534 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 687534 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 687534 ']' 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:21.637 12:34:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 [2024-10-30 12:34:54.008590] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
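nvmf_tcp_init above builds a two-endpoint topology on a single host: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, iptables opens TCP/4420 on the initiator side, and one ping in each direction verifies the path. Condensed sketch of exactly those commands (interface names and addresses as in this run; the addr-flush steps are omitted):

    # Sketch: the namespace plumbing performed by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator

Keeping the target in its own namespace forces traffic over the real E810 ports while both ends still live on one test box.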
00:23:21.637 [2024-10-30 12:34:54.008690] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.637 [2024-10-30 12:34:54.076779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.637 [2024-10-30 12:34:54.130131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.637 [2024-10-30 12:34:54.130200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.637 [2024-10-30 12:34:54.130229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.637 [2024-10-30 12:34:54.130240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.637 [2024-10-30 12:34:54.130249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.637 [2024-10-30 12:34:54.130837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 [2024-10-30 12:34:54.268529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 [2024-10-30 12:34:54.276767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 null0 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
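rpc_cmd above drives the target instance just started inside the namespace (pid 687534, default socket /var/tmp/spdk.sock): it creates the TCP transport, exposes the well-known discovery subsystem on 10.0.0.2:8009, and backs the test with a null bdev. The same bring-up as direct scripts/rpc.py calls (a sketch; rpc_cmd is the suite's wrapper around this):

    # Sketch: target-side bring-up, mirroring the rpc_cmd trace above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MB null bdev, 512 B blocks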
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 null1 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=687601 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 687601 /tmp/host.sock 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 687601 ']' 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:21.637 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:21.637 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.895 [2024-10-30 12:34:54.351785] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:23:21.895 [2024-10-30 12:34:54.351867] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687601 ] 00:23:21.895 [2024-10-30 12:34:54.416456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.895 [2024-10-30 12:34:54.475091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
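A second SPDK app (pid 687601) now runs in the root namespace as the NVMe host, with its own RPC socket so the two instances can be driven independently; bdev_nvme_start_discovery points it at the target's discovery service. Sketch of that host-side sequence (rpc.py standing in for the suite's rpc_cmd -s wrapper):

    # Sketch: host-side app on its own RPC socket, as started above.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

From here on, every controller the discovery service reports is auto-attached under the bdev name prefix nvme.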
common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
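get_subsystem_names and get_bdev_list above reduce RPC JSON to a sorted, space-joined string so a wait condition can compare state with a plain ==; both still come back empty here because nothing has been attached yet. Minimal sketch of the two helpers (rpc.py standing in for rpc_cmd):

    # Sketch: the jq | sort | xargs reductions used by the helpers above.
    get_subsystem_names() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers |
            jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
            jq -r '.[].name' | sort | xargs
    }
    [[ "$(get_bdev_list)" == "" ]] && echo "no namespaces attached yet"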
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.154 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 [2024-10-30 12:34:54.902434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:22.413 12:34:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.413 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.414 12:34:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:23:22.414 12:34:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:23.347 [2024-10-30 12:34:55.665858] bdev_nvme.c:7292:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:23.347 [2024-10-30 12:34:55.665883] bdev_nvme.c:7378:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:23.347 [2024-10-30 12:34:55.665904] bdev_nvme.c:7255:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.347 
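The wait above is common/autotest_common.sh's waitforcondition: it evals an arbitrary condition string up to max=10 times with a one-second sleep between tries. The first probe saw no controllers, and the discovery attach now races the second probe. Standalone sketch of the loop:

    # Sketch: bounded condition polling, as in the waitforcondition trace above.
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1   # condition never became true within ~10 s
    }
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'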
[2024-10-30 12:34:55.793334] bdev_nvme.c:7221:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:23.347 [2024-10-30 12:34:55.895147] bdev_nvme.c:5583:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:23.347 [2024-10-30 12:34:55.896185] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2370f40:1 started. 00:23:23.347 [2024-10-30 12:34:55.897937] bdev_nvme.c:7111:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:23.347 [2024-10-30 12:34:55.897956] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.347 [2024-10-30 12:34:55.904393] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2370f40 was disconnected and freed. delete nvme_qpair. 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.606 12:34:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:23.606 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
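get_subsystem_paths above pulls the transport service IDs (TCP ports) of every path attached under one controller name, numerically sorted, so multipath growth is assertable as a string: "4420" now, "4420 4421" expected once the second listener is added. Sketch (rpc.py standing in for rpc_cmd):

    # Sketch: per-controller path listing, as traced above.
    get_subsystem_paths() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    [[ "$(get_subsystem_paths nvme0)" == "4420" ]] && echo "single path via port 4420"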
common/autotest_common.sh@919 -- # get_notification_count 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.607 [2024-10-30 12:34:56.258295] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2379500:1 started. 
00:23:23.607 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.865 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.865 12:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:23.865 [2024-10-30 12:34:56.305869] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2379500 was disconnected and freed. delete nvme_qpair. 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.800 [2024-10-30 12:34:57.381835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.800 [2024-10-30 12:34:57.382726] bdev_nvme.c:7274:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.800 [2024-10-30 12:34:57.382776] bdev_nvme.c:7255:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.800 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:24.801 12:34:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:24.801 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.058 [2024-10-30 12:34:57.509081] 
bdev_nvme.c:7216:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:25.058 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:25.058 12:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:25.058 [2024-10-30 12:34:57.735525] bdev_nvme.c:5583:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:25.058 [2024-10-30 12:34:57.735593] bdev_nvme.c:7111:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:25.058 [2024-10-30 12:34:57.735614] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:25.058 [2024-10-30 12:34:57.735622] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.989 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.989 [2024-10-30 12:34:58.593954] bdev_nvme.c:7274:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:25.990 [2024-10-30 12:34:58.594009] bdev_nvme.c:7255:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.990 [2024-10-30 12:34:58.602808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.990 [2024-10-30 
12:34:58.602840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.990 [2024-10-30 12:34:58.602873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.990 [2024-10-30 12:34:58.602887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.990 [2024-10-30 12:34:58.602901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.990 [2024-10-30 12:34:58.602914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.990 [2024-10-30 12:34:58.602927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.990 [2024-10-30 12:34:58.602940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.990 [2024-10-30 12:34:58.602953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.990 [2024-10-30 12:34:58.612798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:25.990 [2024-10-30 12:34:58.622837] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.990 [2024-10-30 12:34:58.622859] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.990 [2024-10-30 12:34:58.622868] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.622876] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.990 [2024-10-30 12:34:58.622921] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.623066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.990 [2024-10-30 12:34:58.623101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341550 with addr=10.0.0.2, port=4420 00:23:25.990 [2024-10-30 12:34:58.623119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:25.990 [2024-10-30 12:34:58.623142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:25.990 [2024-10-30 12:34:58.623175] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.990 [2024-10-30 12:34:58.623193] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.990 [2024-10-30 12:34:58.623209] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
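
The connect() failures appearing here report errno = 111, which on Linux is ECONNREFUSED: the test just removed the 4420 listener with nvmf_subsystem_remove_listener, so the host's automatic reconnect attempts to 10.0.0.2:4420 are refused until the discovery logic prunes that path. A quick way to confirm the errno mapping (illustrative one-liner, not part of the test):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused
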
00:23:25.990 [2024-10-30 12:34:58.623221] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.990 [2024-10-30 12:34:58.623231] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.990 [2024-10-30 12:34:58.623266] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.990 [2024-10-30 12:34:58.632953] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.990 [2024-10-30 12:34:58.632974] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.990 [2024-10-30 12:34:58.632982] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.632989] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.990 [2024-10-30 12:34:58.633026] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.633236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.990 [2024-10-30 12:34:58.633273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341550 with addr=10.0.0.2, port=4420 00:23:25.990 [2024-10-30 12:34:58.633290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:25.990 [2024-10-30 12:34:58.633313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:25.990 [2024-10-30 12:34:58.633345] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.990 [2024-10-30 12:34:58.633362] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.990 [2024-10-30 12:34:58.633375] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.990 [2024-10-30 12:34:58.633387] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.990 [2024-10-30 12:34:58.633396] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.990 [2024-10-30 12:34:58.633411] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
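
The notification checks interleaved with the reconnect noise follow a simple bookkeeping scheme that the trace makes visible: each get_notification_count call fetches everything newer than the last seen ID and advances the cursor by however many arrived (notify_id stepped 0 -> 1 -> 2 above and reaches 4 further down). A sketch consistent with the traced calls, assuming rpc_cmd is the usual wrapper around SPDK's scripts/rpc.py:

  # Count notifications newer than $notify_id and advance the cursor.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }
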
00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.990 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.990 [2024-10-30 12:34:58.643200] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.990 [2024-10-30 12:34:58.643224] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.990 [2024-10-30 12:34:58.643247] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.643268] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.990 [2024-10-30 12:34:58.643297] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.643451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.990 [2024-10-30 12:34:58.643480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341550 with addr=10.0.0.2, port=4420 00:23:25.990 [2024-10-30 12:34:58.643497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:25.990 [2024-10-30 12:34:58.643520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:25.990 [2024-10-30 12:34:58.643552] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.990 [2024-10-30 12:34:58.643570] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.990 [2024-10-30 12:34:58.643583] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
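
get_bdev_list, polled again here, is just the host-side bdev inventory flattened to one sorted line so it can be string-compared against "nvme0n1 nvme0n2". A sketch matching the pipeline traced at host/discovery.sh line 55:

  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs   # -> "nvme0n1 nvme0n2"
  }
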
00:23:25.990 [2024-10-30 12:34:58.643594] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.990 [2024-10-30 12:34:58.643613] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.990 [2024-10-30 12:34:58.643629] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.990 [2024-10-30 12:34:58.653331] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.990 [2024-10-30 12:34:58.653355] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.990 [2024-10-30 12:34:58.653364] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.653372] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.990 [2024-10-30 12:34:58.653398] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:25.990 [2024-10-30 12:34:58.653501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.990 [2024-10-30 12:34:58.653528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341550 with addr=10.0.0.2, port=4420 00:23:25.990 [2024-10-30 12:34:58.653555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:25.991 [2024-10-30 12:34:58.653582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:25.991 [2024-10-30 12:34:58.653616] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.991 [2024-10-30 12:34:58.653634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.991 [2024-10-30 12:34:58.653647] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.991 [2024-10-30 12:34:58.653659] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.991 [2024-10-30 12:34:58.653668] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.991 [2024-10-30 12:34:58.653683] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.991 [2024-10-30 12:34:58.663432] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:25.991 [2024-10-30 12:34:58.663455] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:25.991 [2024-10-30 12:34:58.663465] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:25.991 [2024-10-30 12:34:58.663473] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:25.991 [2024-10-30 12:34:58.663498] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:25.991 [2024-10-30 12:34:58.663627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.991 [2024-10-30 12:34:58.663655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341550 with addr=10.0.0.2, port=4420 00:23:25.991 [2024-10-30 12:34:58.663672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:25.991 [2024-10-30 12:34:58.663693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:25.991 [2024-10-30 12:34:58.663725] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:25.991 [2024-10-30 12:34:58.663742] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:25.991 [2024-10-30 12:34:58.663755] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:25.991 [2024-10-30 12:34:58.663767] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:25.991 [2024-10-30 12:34:58.663775] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:25.991 [2024-10-30 12:34:58.663790] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:25.991 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.249 [2024-10-30 12:34:58.673532] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:26.249 [2024-10-30 12:34:58.673570] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:26.249 [2024-10-30 12:34:58.673579] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:26.249 [2024-10-30 12:34:58.673586] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:26.249 [2024-10-30 12:34:58.673625] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:26.249 [2024-10-30 12:34:58.673775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.249 [2024-10-30 12:34:58.673802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2341550 with addr=10.0.0.2, port=4420 00:23:26.249 [2024-10-30 12:34:58.673827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2341550 is same with the state(6) to be set 00:23:26.249 [2024-10-30 12:34:58.673850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2341550 (9): Bad file descriptor 00:23:26.249 [2024-10-30 12:34:58.673883] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:26.249 [2024-10-30 12:34:58.673901] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:26.249 [2024-10-30 12:34:58.673914] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
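
Likewise, get_subsystem_paths reduces bdev_nvme_get_controllers output to a numerically sorted list of listener ports, which is what lets discovery.sh@131 assert below that only $NVMF_SECOND_PORT (4421) remains once the 4420 path is dropped. Sketch from the pipeline traced at host/discovery.sh line 63:

  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # e.g. "4421"
  }
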
00:23:26.249 [2024-10-30 12:34:58.673925] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:26.250 [2024-10-30 12:34:58.673934] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:26.250 [2024-10-30 12:34:58.673949] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.250 [2024-10-30 12:34:58.679894] bdev_nvme.c:7079:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:26.250 [2024-10-30 12:34:58.679921] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.250 12:34:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.250 12:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.622 [2024-10-30 12:34:59.948427] bdev_nvme.c:7292:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:27.622 [2024-10-30 12:34:59.948450] bdev_nvme.c:7378:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:27.622 [2024-10-30 12:34:59.948470] bdev_nvme.c:7255:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.622 [2024-10-30 12:35:00.035817] bdev_nvme.c:7221:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:27.880 [2024-10-30 12:35:00.342698] bdev_nvme.c:5583:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:27.880 [2024-10-30 12:35:00.343517] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x233e270:1 started. 00:23:27.880 [2024-10-30 12:35:00.345612] bdev_nvme.c:7111:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:27.880 [2024-10-30 12:35:00.345649] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:27.880 [2024-10-30 12:35:00.347213] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x233e270 was disconnected and freed. delete nvme_qpair. 
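
The request/response pair that follows is the point of discovery.sh@143: starting a second discovery with the already-registered name nvme must fail with -17 "File exists", and the NOT wrapper being evaluated here (autotest_common.sh@650-677 in the trace) inverts the exit status so that expected failure counts as a pass. A simplified sketch, omitting the valid_exec_arg plumbing visible in the trace:

  # Succeed only if the wrapped command fails; propagate signal deaths.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed by signal: re-raise
      (( es != 0 ))                    # success iff the command failed
  }
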
00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.880 request: 00:23:27.880 { 00:23:27.880 "name": "nvme", 00:23:27.880 "trtype": "tcp", 00:23:27.880 "traddr": "10.0.0.2", 00:23:27.880 "adrfam": "ipv4", 00:23:27.880 "trsvcid": "8009", 00:23:27.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:27.880 "wait_for_attach": true, 00:23:27.880 "method": "bdev_nvme_start_discovery", 00:23:27.880 "req_id": 1 00:23:27.880 } 00:23:27.880 Got JSON-RPC error response 00:23:27.880 response: 00:23:27.880 { 00:23:27.880 "code": -17, 00:23:27.880 "message": "File exists" 00:23:27.880 } 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.880 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.880 request: 00:23:27.880 { 00:23:27.880 "name": "nvme_second", 00:23:27.880 "trtype": "tcp", 00:23:27.880 "traddr": "10.0.0.2", 00:23:27.880 "adrfam": "ipv4", 00:23:27.880 "trsvcid": "8009", 00:23:27.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:27.880 "wait_for_attach": true, 00:23:27.880 "method": "bdev_nvme_start_discovery", 00:23:27.880 "req_id": 1 00:23:27.880 } 00:23:27.880 Got JSON-RPC error response 00:23:27.880 response: 00:23:27.880 { 00:23:27.880 "code": -17, 00:23:27.880 "message": "File exists" 00:23:27.880 } 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.881 12:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.255 [2024-10-30 12:35:01.540933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.255 [2024-10-30 12:35:01.540989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2358800 with addr=10.0.0.2, port=8010 00:23:29.255 [2024-10-30 12:35:01.541015] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:29.255 [2024-10-30 12:35:01.541028] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:29.255 [2024-10-30 12:35:01.541042] 
bdev_nvme.c:7360:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:30.189 [2024-10-30 12:35:02.543451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.189 [2024-10-30 12:35:02.543513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2358800 with addr=10.0.0.2, port=8010 00:23:30.189 [2024-10-30 12:35:02.543543] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:30.189 [2024-10-30 12:35:02.543557] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:30.189 [2024-10-30 12:35:02.543570] bdev_nvme.c:7360:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:31.124 [2024-10-30 12:35:03.545655] bdev_nvme.c:7335:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:31.124 request: 00:23:31.124 { 00:23:31.124 "name": "nvme_second", 00:23:31.124 "trtype": "tcp", 00:23:31.124 "traddr": "10.0.0.2", 00:23:31.124 "adrfam": "ipv4", 00:23:31.124 "trsvcid": "8010", 00:23:31.124 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:31.124 "wait_for_attach": false, 00:23:31.124 "attach_timeout_ms": 3000, 00:23:31.124 "method": "bdev_nvme_start_discovery", 00:23:31.124 "req_id": 1 00:23:31.124 } 00:23:31.124 Got JSON-RPC error response 00:23:31.124 response: 00:23:31.124 { 00:23:31.124 "code": -110, 00:23:31.124 "message": "Connection timed out" 00:23:31.124 } 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 687601 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.124 rmmod nvme_tcp 00:23:31.124 rmmod nvme_fabrics 00:23:31.124 rmmod nvme_keyring 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 687534 ']' 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 687534 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 687534 ']' 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 687534 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 687534 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 687534' 00:23:31.124 killing process with pid 687534 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 687534 00:23:31.124 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 687534 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.384 12:35:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.384 12:35:03 
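
The nvmftestfini trace above tears the host side down: sync, unload the nvme kernel modules, kill the target process, and strip only the SPDK-tagged iptables rules. A hedged sketch of that sequence, with this run's pid inlined; the final netns deletion is an assumption about what `_remove_spdk_ns` does (its output is redirected away in the log):

    # Sketch of the traced cleanup (pid and interface names from this run).
    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 687534 && wait 687534       # nvmf target pid in this run
    # Keep all firewall state except the SPDK_NVMF-tagged ACCEPT rules:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: _remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1
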
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.294 00:23:33.294 real 0m14.437s 00:23:33.294 user 0m21.318s 00:23:33.294 sys 0m2.939s 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.294 ************************************ 00:23:33.294 END TEST nvmf_host_discovery 00:23:33.294 ************************************ 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:33.294 12:35:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.553 ************************************ 00:23:33.553 START TEST nvmf_host_multipath_status 00:23:33.553 ************************************ 00:23:33.553 12:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:33.553 * Looking for test storage... 00:23:33.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.553 --rc genhtml_branch_coverage=1 00:23:33.553 --rc genhtml_function_coverage=1 00:23:33.553 --rc genhtml_legend=1 00:23:33.553 --rc geninfo_all_blocks=1 00:23:33.553 --rc geninfo_unexecuted_blocks=1 00:23:33.553 00:23:33.553 ' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.553 --rc genhtml_branch_coverage=1 00:23:33.553 --rc genhtml_function_coverage=1 00:23:33.553 --rc genhtml_legend=1 00:23:33.553 --rc geninfo_all_blocks=1 00:23:33.553 --rc geninfo_unexecuted_blocks=1 00:23:33.553 00:23:33.553 ' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.553 --rc genhtml_branch_coverage=1 00:23:33.553 --rc genhtml_function_coverage=1 00:23:33.553 --rc genhtml_legend=1 00:23:33.553 --rc geninfo_all_blocks=1 00:23:33.553 --rc geninfo_unexecuted_blocks=1 00:23:33.553 00:23:33.553 ' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.553 --rc genhtml_branch_coverage=1 00:23:33.553 --rc genhtml_function_coverage=1 00:23:33.553 --rc genhtml_legend=1 00:23:33.553 --rc 
geninfo_all_blocks=1 00:23:33.553 --rc geninfo_unexecuted_blocks=1 00:23:33.553 00:23:33.553 ' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
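
The cmp_versions trace above (`lt 1.15 2`) is scripts/common.sh deciding that the installed lcov predates 2.0, so the older `--rc lcov_branch_coverage=1` flag spelling gets exported. A condensed sketch of the same field-by-field comparison, under the assumption that all version fields are numeric:

    # Sketch of the cmp_versions idea: split on ., -, :, then compare fields numerically.
    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use the old --rc flag names"
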
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:23:33.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.554 12:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.090 12:35:08 
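
The "integer expression expected" message captured above is a real (harmless) failure inside common.sh: line 33 runs `[ '' -eq 1 ]` with an unset variable, and `[`'s `-eq` requires integer operands. A one-line reproduction plus the usual guard; the variable name below is hypothetical, since the log only shows its empty expansion:

    # Reproduction of the captured error:
    [ '' -eq 1 ]                        # bash: [: : integer expression expected
    # Typical guard: default the variable before any numeric test.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"   # SOME_FLAG is a stand-in name
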
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.090 
12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:36.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:36.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.090 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:36.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
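
The trace above matches both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, `ice` driver) and resolves each PCI function to its net device through sysfs, yielding cvl_0_0 and cvl_0_1. The harness walks its own cached PCI bus scan; the sketch below substitutes `lspci` for that cache, so it is an approximation rather than the harness's exact mechanism:

    # Sketch: map E810 PCI functions (8086:159b) to kernel net device names via sysfs.
    for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue
            echo "Found net device under $pci: ${path##*/}"
        done
    done
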
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:36.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:23:36.091 00:23:36.091 --- 10.0.0.2 ping statistics --- 00:23:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.091 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:36.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:23:36.091 00:23:36.091 --- 10.0.0.1 ping statistics --- 00:23:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.091 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=690785 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 690785 
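
The nvmf_tcp_init trace above moves the target port into its own network namespace so both E810 ports on one machine exchange traffic over the fabric instead of being short-circuited by the local stack, then verifies reachability in both directions. The topology commands, reconstructed from the trace:

    # Topology built above: target port in a netns, initiator in the default ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged ACCEPT rule so teardown can strip it by the SPDK_NVMF comment:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
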
00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 690785 ']' 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:36.091 [2024-10-30 12:35:08.440507] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:23:36.091 [2024-10-30 12:35:08.440608] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.091 [2024-10-30 12:35:08.511930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:36.091 [2024-10-30 12:35:08.570718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.091 [2024-10-30 12:35:08.570796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.091 [2024-10-30 12:35:08.570809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.091 [2024-10-30 12:35:08.570820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.091 [2024-10-30 12:35:08.570829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.091 [2024-10-30 12:35:08.575278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.091 [2024-10-30 12:35:08.575289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=690785 00:23:36.091 12:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:36.349 [2024-10-30 12:35:09.028606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.607 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:36.865 Malloc0 00:23:36.865 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:37.127 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.390 12:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.648 [2024-10-30 12:35:10.166854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.648 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.906 [2024-10-30 12:35:10.435514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=691068 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 691068 
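
With nvmf_tgt up inside the namespace, the trace above provisions the multipath target: one TCP transport, a malloc bdev, and a subsystem with ANA reporting enabled (`-r`, which the later listener_set_ana_state calls depend on) exposed on two listeners. The same RPCs, with `rpc.py` abbreviating the full scripts/rpc.py path:

    # Target provisioning as traced above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192            # flags exactly as traced
    rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                      # -r: ANA reporting on
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The host side then attaches the same subsystem twice through bdevperf (`bdev_nvme_attach_controller ... -x multipath`, once per port), so a single Nvme0n1 bdev ends up with two I/O paths.
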
/var/tmp/bdevperf.sock 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 691068 ']' 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.906 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:38.164 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.164 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:38.164 12:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:38.421 12:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:38.987 Nvme0n1 00:23:38.988 12:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:39.245 Nvme0n1 00:23:39.503 12:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:39.503 12:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.401 12:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:41.401 12:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:41.657 12:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:42.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:43.150 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:43.150 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:43.150 12:35:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.150 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.407 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.407 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:43.407 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.407 12:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.666 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.666 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.666 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.666 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.924 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.924 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.925 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.925 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.182 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.182 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:44.182 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.182 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.440 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.440 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:44.440 12:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.440 12:35:16 
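
Every check in this phase is host/multipath_status.sh@64: `bdev_nvme_get_io_paths` on the bdevperf RPC socket, filtered with jq by the listener port and the field under test (`current`, `connected`, or `accessible`). A condensed helper matching the traced filter:

    # Sketch of the probe repeated above and below.
    port_status() {    # usage: port_status <trsvcid> <field> <expected>
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4420 current true       # optimized 4420 is the active path
    port_status 4421 connected true     # 4421 stays connected even when not current
    port_status 4421 accessible true
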
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:44.698 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.698 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:44.698 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:44.957 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:45.214 12:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:46.149 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:46.149 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:46.149 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.149 12:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.448 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.448 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:46.448 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.448 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.736 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.736 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.736 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.736 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.994 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.994 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:46.994 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.994 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.251 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.251 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:47.251 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.251 12:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:47.815 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:48.381 12:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:48.381 12:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.755 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.014 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.014 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.014 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.014 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.272 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.272 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.272 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.272 12:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:50.530 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.530 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:50.530 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.530 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.788 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.788 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.788 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.788 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.047 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.047 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:51.047 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:23:51.305 12:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:51.563 12:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.938 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.196 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.196 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.196 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.196 12:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.454 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.454 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.454 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.454 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.712 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.712 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.712 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:53.712 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.970 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.970 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:53.970 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.970 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.227 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.227 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:54.227 12:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:54.485 12:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:54.742 12:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.114 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.372 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.372 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.372 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.372 12:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.630 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.630 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.630 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.630 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.888 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.888 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:56.888 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.888 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.150 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.150 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:57.150 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.150 12:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.409 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.409 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:57.409 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.666 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.924 12:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:59.299 12:35:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.299 12:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.558 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.558 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.558 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.558 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.816 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.816 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.816 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.816 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.074 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.074 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:00.074 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.074 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.332 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.332 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:00.332 12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.332 
12:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.590 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.590 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:00.848 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:00.848 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:01.107 12:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.365 12:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.739 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.997 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.997 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.997 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.997 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.255 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.255 12:35:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.255 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.255 12:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.513 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.513 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.513 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.513 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.772 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.772 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.772 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.772 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.030 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.030 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:04.030 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.288 12:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.854 12:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:05.790 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:05.790 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:05.790 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.790 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.048 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.048 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.048 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.048 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.307 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.307 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.307 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.307 12:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.564 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.564 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.564 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.565 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.822 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.822 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.823 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.823 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.080 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.080 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.080 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.080 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.338 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.338 12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:07.338 
12:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.596 12:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:07.854 12:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.229 12:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.523 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.523 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.523 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.524 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.810 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.810 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.810 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.810 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:10.068 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.068 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:10.068 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.068 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.327 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.327 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:10.327 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.327 12:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.585 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.585 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:10.585 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.843 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:11.100 12:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:12.034 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:12.034 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:12.034 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.034 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.600 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.600 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:12.600 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.600 12:35:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.600 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:24:12.600 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.600 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.600 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.858 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.858 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.858 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.858 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.117 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.117 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:13.117 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.117 12:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 691068 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 691068 ']' 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 691068 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.682 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 691068 00:24:13.939 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:24:13.939 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:13.939 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 691068' 00:24:13.939 killing process with pid 691068 00:24:13.939 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 691068 00:24:13.939 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 691068 00:24:13.939 { 00:24:13.939 "results": [ 00:24:13.939 { 00:24:13.939 "job": "Nvme0n1", 00:24:13.939 "core_mask": "0x4", 00:24:13.939 "workload": "verify", 00:24:13.939 "status": "terminated", 00:24:13.939 "verify_range": { 00:24:13.939 "start": 0, 00:24:13.939 "length": 16384 00:24:13.939 }, 00:24:13.939 "queue_depth": 128, 00:24:13.939 "io_size": 4096, 00:24:13.939 "runtime": 34.320633, 00:24:13.939 "iops": 7961.158525252142, 00:24:13.939 "mibps": 31.09827548926618, 00:24:13.939 "io_failed": 0, 00:24:13.939 "io_timeout": 0, 00:24:13.939 "avg_latency_us": 16033.335569132405, 00:24:13.939 "min_latency_us": 172.1837037037037, 00:24:13.939 "max_latency_us": 4026531.84 00:24:13.939 } 00:24:13.939 ], 00:24:13.939 "core_count": 1 00:24:13.939 } 00:24:14.208 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 691068 00:24:14.208 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:14.208 [2024-10-30 12:35:10.499588] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:24:14.208 [2024-10-30 12:35:10.499699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691068 ] 00:24:14.208 [2024-10-30 12:35:10.569489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.208 [2024-10-30 12:35:10.629918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.208 Running I/O for 90 seconds... 
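
The xtrace above cycles through ANA-state combinations and asserts the resulting io_path flags over bdevperf's RPC socket. A minimal reconstruction of the three multipath_status.sh helpers it exercises, inferred from the trace itself (the function names, script line numbers, and rpc.py/jq invocations appear verbatim above; the variable names are illustrative, so treat this as a sketch rather than the exact SPDK source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421

    # @59-@60: set the ANA state of each listener on cnode1.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$NVMF_SECOND_PORT" -n "$2"
    }

    # @64: read one io_path flag (current/connected/accessible) for a port
    # and compare it against the expected value.
    port_status() {
        local port=$1 field=$2 expected=$3 status
        status=$($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$status" == "$expected" ]]
    }

    # @68-@73: assert all six flags, in the argument order seen in the trace.
    check_status() {
        port_status "$NVMF_PORT" current "$1"
        port_status "$NVMF_SECOND_PORT" current "$2"
        port_status "$NVMF_PORT" connected "$3"
        port_status "$NVMF_SECOND_PORT" connected "$4"
        port_status "$NVMF_PORT" accessible "$5"
        port_status "$NVMF_SECOND_PORT" accessible "$6"
    }

Each set_ANA_state call in the trace is followed by a "sleep 1", presumably to give the initiator time to process the ANA change before check_status re-reads the flags.
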
00:24:14.208 8380.00 IOPS, 32.73 MiB/s [2024-10-30T11:35:46.889Z] 8393.50 IOPS, 32.79 MiB/s [2024-10-30T11:35:46.890Z] 8394.00 IOPS, 32.79 MiB/s [2024-10-30T11:35:46.890Z] 8446.75 IOPS, 33.00 MiB/s [2024-10-30T11:35:46.890Z] 8452.60 IOPS, 33.02 MiB/s [2024-10-30T11:35:46.890Z] 8439.50 IOPS, 32.97 MiB/s [2024-10-30T11:35:46.890Z] 8434.43 IOPS, 32.95 MiB/s [2024-10-30T11:35:46.890Z] 8423.75 IOPS, 32.91 MiB/s [2024-10-30T11:35:46.890Z] 8405.33 IOPS, 32.83 MiB/s [2024-10-30T11:35:46.890Z] 8419.70 IOPS, 32.89 MiB/s [2024-10-30T11:35:46.890Z] 8415.91 IOPS, 32.87 MiB/s [2024-10-30T11:35:46.890Z] 8410.17 IOPS, 32.85 MiB/s [2024-10-30T11:35:46.890Z] 8421.54 IOPS, 32.90 MiB/s [2024-10-30T11:35:46.890Z] 8419.00 IOPS, 32.89 MiB/s [2024-10-30T11:35:46.890Z] 8424.40 IOPS, 32.91 MiB/s [2024-10-30T11:35:46.890Z] [2024-10-30 12:35:27.132298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.209 [2024-10-30 12:35:27.132363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.132968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.132985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
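
The completions in this stretch are all printed as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)": I/Os that completed with a path-related error after a listener was moved to the inaccessible ANA state. The "(03/02)" pair is NVMe status code type 0x3 (path-related) and status code 0x02. A hypothetical helper to decode the pairs as spdk_nvme_print_completion formats them (the SCT 0x3 code names come from the NVMe base spec, not from this log):

    # Hypothetical decoder for the "(SCT/SC)" pair in the completion prints.
    decode_path_status() {
        case "$1" in
            03/00) echo "Internal Path Error" ;;
            03/01) echo "Asymmetric Access Persistent Loss" ;;
            03/02) echo "Asymmetric Access Inaccessible" ;;
            03/03) echo "Asymmetric Access Transition" ;;
            *)     echo "unknown path status ($1)" ;;
        esac
    }
    decode_path_status 03/02    # -> Asymmetric Access Inaccessible
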
00:24:14.209 [2024-10-30 12:35:27.133114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.209 [2024-10-30 12:35:27.133914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:14.209 [2024-10-30 12:35:27.133937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.133953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.133975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.133992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
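
Each nvme_io_qpair_print_command entry above records the opcode, queue/command IDs, and the LBA range; the len:8 blocks per command are consistent with the 4096-byte io_size in the summary and a 512-byte block size. A hypothetical one-liner to tally the replayed command prints by opcode, assuming they are read from try.txt (the file the test cats at multipath_status.sh@141):

    # Hypothetical tally: count READ vs WRITE command prints in the replay.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]\+' try.txt |
        awk '{print $NF}' | sort | uniq -c
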
00:24:14.210 [2024-10-30 12:35:27.134366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.134695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.134712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.210 [2024-10-30 12:35:27.135769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.135981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:14.210 [2024-10-30 12:35:27.136268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:14.210 [2024-10-30 12:35:27.136298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.210 [2024-10-30 12:35:27.136315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.136955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.136971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.211 [2024-10-30 12:35:27.137525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:24:14.211 [2024-10-30 12:35:27.137742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.137966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.137982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.138025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.138068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.138112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.138155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.138199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.211 [2024-10-30 12:35:27.138266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:14.211 [2024-10-30 12:35:27.138310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:14.212 [2024-10-30 12:35:27.138648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.212 [2024-10-30 12:35:27.138664] nvme_qpair.c: 
00:24:14.212 7928.62 IOPS, 30.97 MiB/s [2024-10-30T11:35:46.893Z] 7462.24 IOPS, 29.15 MiB/s [2024-10-30T11:35:46.893Z] 7047.67 IOPS, 27.53 MiB/s [2024-10-30T11:35:46.893Z] 6676.74 IOPS, 26.08 MiB/s [2024-10-30T11:35:46.893Z] 6745.90 IOPS, 26.35 MiB/s [2024-10-30T11:35:46.893Z] 6825.86 IOPS, 26.66 MiB/s [2024-10-30T11:35:46.893Z] 6946.05 IOPS, 27.13 MiB/s [2024-10-30T11:35:46.893Z] 7140.17 IOPS, 27.89 MiB/s [2024-10-30T11:35:46.893Z] 7315.67 IOPS, 28.58 MiB/s [2024-10-30T11:35:46.893Z] 7474.08 IOPS, 29.20 MiB/s [2024-10-30T11:35:46.893Z] 7508.08 IOPS, 29.33 MiB/s [2024-10-30T11:35:46.893Z] 7539.19 IOPS, 29.45 MiB/s [2024-10-30T11:35:46.893Z] 7564.71 IOPS, 29.55 MiB/s [2024-10-30T11:35:46.893Z] 7640.14 IOPS, 29.84 MiB/s [2024-10-30T11:35:46.893Z] 7754.30 IOPS, 30.29 MiB/s [2024-10-30T11:35:46.893Z] 7875.39 IOPS, 30.76 MiB/s
00:24:14.212 [2024-10-30 12:35:43.667 - 12:35:43.676] nvme_qpair.c: *NOTICE*: [repetitive per-command output omitted] repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs: WRITE (lba 35728-36472, len:8, SGL DATA BLOCK OFFSET) and READ (lba 35384-36152, len:8, SGL TRANSPORT DATA BLOCK) commands on sqid:1 nsid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.676102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.676318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.676472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.676510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.676602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.676618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.678950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.678978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.679025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.679064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.679103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.679142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:24:14.215 [2024-10-30 12:35:43.679240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.679478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.215 [2024-10-30 12:35:43.679535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.215 [2024-10-30 12:35:43.679694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:14.215 [2024-10-30 12:35:43.679716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.679732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.679770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.679808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.679846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.679884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.679928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.679967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.679989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.680005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.680058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.680095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.680132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.680169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.680206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.680243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.680312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.680335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.680352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.681380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:14.216 [2024-10-30 12:35:43.681472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.681549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.681603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.681640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.681927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.681967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.681989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.682005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.682044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.682082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.682151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.216 [2024-10-30 12:35:43.682193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.682743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.682788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.682828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.216 [2024-10-30 12:35:43.682867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:14.216 [2024-10-30 12:35:43.682889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.682905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.682928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.682949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.682973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.682989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.683157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:24:14.217 [2024-10-30 12:35:43.683234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.683361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.683400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.683477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.683559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.683582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.683598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.217 [2024-10-30 12:35:43.685797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.217 [2024-10-30 12:35:43.685885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:14.217 [2024-10-30 12:35:43.685905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.685920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.685942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:14.218 [2024-10-30 12:35:43.685957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.685988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.686005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.686042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.686079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.686115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.686150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.686188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.686226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.686272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.686290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.687642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 
nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.687688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.218 [2024-10-30 12:35:43.687727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.687782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.687825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.687879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.687915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.687951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.687972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.687987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.688009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.688024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.689177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.218 [2024-10-30 12:35:43.689218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:14.218 [2024-10-30 12:35:43.689246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.218 [2024-10-30 12:35:43.689273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:14.218 [2024-10-30 12:35:43.689299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.218 [2024-10-30 12:35:43.689316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:14.218 [2024-10-30 12:35:43.689377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.218 [2024-10-30 12:35:43.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
[... further WRITE/READ submissions on sqid:1 (lba range ~35384-37816), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 12:35:43.689 through 12:35:43.710, omitted ...]
00:24:14.223 [2024-10-30 12:35:43.710468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:14.223 [2024-10-30 12:35:43.710491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.223 [2024-10-30 12:35:43.710507] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:14.223 [2024-10-30 12:35:43.710530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.223 [2024-10-30 12:35:43.710547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:14.223 [2024-10-30 12:35:43.710569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.223 [2024-10-30 12:35:43.710585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:14.223 [2024-10-30 12:35:43.710622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.223 [2024-10-30 12:35:43.710641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:14.223 [2024-10-30 12:35:43.710665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.223 [2024-10-30 12:35:43.710681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:14.223 [2024-10-30 12:35:43.710704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.223 [2024-10-30 12:35:43.710720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:14.223 [2024-10-30 12:35:43.710743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.223 [2024-10-30 12:35:43.710759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.710782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.710814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.710838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.710855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.710877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.710909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.710932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.224 [2024-10-30 12:35:43.710947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.710985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.711833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.711975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.711990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.712012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.712028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.712050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.712066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.712098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.712113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.712135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.712154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.712175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.712191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:24:14.224 [2024-10-30 12:35:43.712214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.712229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.224 [2024-10-30 12:35:43.713552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.713625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.713664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.713708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.713746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.224 [2024-10-30 12:35:43.713784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:14.224 [2024-10-30 12:35:43.713806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.713823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.713845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.713861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.716174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.716491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.716532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.716775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.716814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.716966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.716987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:124 nsid:1 lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.225 [2024-10-30 12:35:43.717614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.717754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.225 [2024-10-30 12:35:43.717770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:14.225 [2024-10-30 12:35:43.718450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:24:14.226 [2024-10-30 12:35:43.718935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.718974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.718990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.719230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.719280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.719969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.719991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.720089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.226 [2024-10-30 12:35:43.720289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.226 [2024-10-30 12:35:43.720407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.226 [2024-10-30 12:35:43.720430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.227 [2024-10-30 12:35:43.720446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.227 [2024-10-30 12:35:43.720468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.227 [2024-10-30 12:35:43.720485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.227 [2024-10-30 12:35:43.720507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.227 [2024-10-30 12:35:43.720523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:14.227 [2024-10-30 12:35:43.720546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.720562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.720596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.720624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.720653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.720670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.720693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.720710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.720733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.720749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.722591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.722653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.722693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.722733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.722771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.722816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.722893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.722932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.722977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.722999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.723094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.723328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.723405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.723486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.723564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.723587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.723604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.726125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.227 [2024-10-30 12:35:43.726190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.726231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.726284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.726324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.726364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.726403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.227 [2024-10-30 12:35:43.726441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:14.227 [2024-10-30 12:35:43.726464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:14.228 [2024-10-30 12:35:43.726741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.726779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.726818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.726856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.726895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.726934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.726956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.726972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:14.228 [2024-10-30 12:35:43.727015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.228 [2024-10-30 12:35:43.727033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:14.228 7934.88 IOPS, 31.00 MiB/s
[2024-10-30T11:35:46.909Z] 7951.15 IOPS, 31.06 MiB/s
[2024-10-30T11:35:46.909Z] 7966.82 IOPS, 31.12 MiB/s
[2024-10-30T11:35:46.909Z] Received shutdown signal, test time was about 34.321452 seconds
00:24:14.228
00:24:14.228 Latency(us)
00:24:14.228 [2024-10-30T11:35:46.909Z] Device Information          : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min        max
00:24:14.228 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:14.228 Verification LBA range: start 0x0 length 0x4000
00:24:14.228 Nvme0n1                     :      34.32 7961.16   31.10    0.00  0.00  16033.34    172.18 4026531.84
00:24:14.228 [2024-10-30T11:35:46.909Z] ===================================================================================================================
00:24:14.228 [2024-10-30T11:35:46.909Z] Total                       :            7961.16   31.10    0.00  0.00  16033.34    172.18 4026531.84
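[Editor's note: the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions above are ANA path errors (NVMe status code type 0x3, path related; status code 0x2, ANA inaccessible), logged while the multipath status test holds one path in the inaccessible ANA state; the verify job still completes, so the I/O is evidently being retried on the surviving path. The summary row is also internally consistent: 7961.16 IOPS * 4096 B per I/O = 32,608,911 B/s, and 32,608,911 / 1,048,576 = 31.10 MiB/s, which matches the MiB/s column.]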
00:24:14.228 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:14.485 rmmod nvme_tcp
00:24:14.485 rmmod nvme_fabrics
00:24:14.485 rmmod nvme_keyring
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 690785 ']'
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 690785
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 690785 ']'
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 690785
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:14.485 12:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 690785
00:24:14.485 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:24:14.486 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:14.486 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 690785'
00:24:14.486 killing process with pid 690785
00:24:14.486 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 690785
00:24:14.486 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 690785
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:14.744 12:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:16.650 12:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:16.650
00:24:16.650 real 0m43.334s
00:24:16.650 user 2m10.271s
00:24:16.650 sys 0m11.718s
00:24:16.650 12:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:16.909 ************************************
00:24:16.909 END TEST nvmf_host_multipath_status
00:24:16.909 ************************************
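[Editor's note: the multipath_status teardown traced above reduces to a short shell sequence. A minimal sketch, assuming the process ID, namespace, and interface names used in this run; the ip netns delete line is an assumption about what _remove_spdk_ns does, since its body is hidden behind xtrace_disable_per_cmd:]

    # Teardown sketch reconstructed from the trace above (not verbatim nvmf/common.sh code)
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem under test
    sync                                                              # nvmfcleanup
    modprobe -v -r nvme-tcp                                           # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 690785 && wait 690785                                        # killprocess: stop the target reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore              # iptr: remove only the SPDK test rules
    ip netns delete cvl_0_0_ns_spdk                                   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                          # clear the initiator-side address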
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:16.909 ************************************
00:24:16.909 START TEST nvmf_discovery_remove_ifc
00:24:16.909 ************************************
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:16.909 * Looking for test storage...
00:24:16.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:24:16.909 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.910 --rc genhtml_branch_coverage=1 00:24:16.910 --rc genhtml_function_coverage=1 00:24:16.910 --rc genhtml_legend=1 00:24:16.910 --rc geninfo_all_blocks=1 00:24:16.910 --rc geninfo_unexecuted_blocks=1 00:24:16.910 00:24:16.910 ' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.910 --rc genhtml_branch_coverage=1 00:24:16.910 --rc genhtml_function_coverage=1 00:24:16.910 --rc genhtml_legend=1 00:24:16.910 --rc geninfo_all_blocks=1 00:24:16.910 --rc geninfo_unexecuted_blocks=1 00:24:16.910 00:24:16.910 ' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.910 --rc genhtml_branch_coverage=1 00:24:16.910 --rc genhtml_function_coverage=1 00:24:16.910 --rc genhtml_legend=1 00:24:16.910 --rc geninfo_all_blocks=1 00:24:16.910 --rc geninfo_unexecuted_blocks=1 00:24:16.910 00:24:16.910 ' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:16.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.910 --rc genhtml_branch_coverage=1 00:24:16.910 --rc genhtml_function_coverage=1 00:24:16.910 --rc genhtml_legend=1 00:24:16.910 --rc geninfo_all_blocks=1 00:24:16.910 --rc geninfo_unexecuted_blocks=1 00:24:16.910 00:24:16.910 ' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.910 
12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.910 12:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:19.439 12:35:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:19.439 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.439 12:35:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:19.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:19.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:19.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.439 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:19.440 
12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:19.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:19.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms
00:24:19.440
00:24:19.440 --- 10.0.0.2 ping statistics ---
00:24:19.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:19.440 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:19.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:19.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:24:19.440
00:24:19.440 --- 10.0.0.1 ping statistics ---
00:24:19.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:19.440 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
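[Editor's note: the two pings above verify the split topology that nvmf_tcp_init built earlier in this log: the target port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2, and the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed into a sketch, with each command taken from the nvmf/common.sh trace above:]

    ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator NIC stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace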
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=697556
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 697556
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 697556 ']'
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:19.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:19.440 12:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.440 [2024-10-30 12:35:51.946797] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:24:19.440 [2024-10-30 12:35:51.946883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:19.440 [2024-10-30 12:35:52.016919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.440 [2024-10-30 12:35:52.072951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:19.440 [2024-10-30 12:35:52.073003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:19.440 [2024-10-30 12:35:52.073030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:19.440 [2024-10-30 12:35:52.073041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:19.440 [2024-10-30 12:35:52.073051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:19.440 [2024-10-30 12:35:52.073631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.697 [2024-10-30 12:35:52.227179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:19.697 [2024-10-30 12:35:52.235436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:19.697 null0
00:24:19.697 [2024-10-30 12:35:52.267341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=697581
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 697581 /tmp/host.sock
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 697581 ']'
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:19.697 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:24:19.697 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.697 [2024-10-30 12:35:52.338714] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:24:19.697 [2024-10-30 12:35:52.338792] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697581 ]
00:24:19.955 [2024-10-30 12:35:52.405705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.955 [2024-10-30 12:35:52.461696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:19.955 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:20.267 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.267 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
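[Editor's note: the host side of this test is a second SPDK app acting as the initiator, driven over its own RPC socket /tmp/host.sock. The bring-up traced above reduces to the following sketch (paths shortened; the gloss on -e is an assumption, it appears to set the bdev-layer retry count):]

    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init             # finish the init deferred by --wait-for-rpc
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach                # block until the discovered ctrlr attaches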
00:24:20.267 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.267 12:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:21.196 [2024-10-30 12:35:53.674684] bdev_nvme.c:7292:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:21.196 [2024-10-30 12:35:53.674717] bdev_nvme.c:7378:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:21.196 [2024-10-30 12:35:53.674743] bdev_nvme.c:7255:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:21.196 [2024-10-30 12:35:53.761020] bdev_nvme.c:7221:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:24:21.453 [2024-10-30 12:35:53.936168] bdev_nvme.c:5583:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:24:21.453 [2024-10-30 12:35:53.937341] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1689b90:1 started.
00:24:21.453 [2024-10-30 12:35:53.939069] bdev_nvme.c:8088:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:24:21.453 [2024-10-30 12:35:53.939129] bdev_nvme.c:8088:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:24:21.453 [2024-10-30 12:35:53.939162] bdev_nvme.c:8088:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:24:21.453 [2024-10-30 12:35:53.939183] bdev_nvme.c:7111:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:21.453 [2024-10-30 12:35:53.939222] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:21.453 [2024-10-30 12:35:53.943548] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1689b90 was disconnected and freed. delete nvme_qpair.
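[Editor's note: wait_for_bdev and get_bdev_list, whose pipeline is traced above, poll the host app's bdev list until it matches the expected name; the repeated get_bdev_list/sleep 1 iterations that follow are that loop running while the test deletes the target's IP and downs its interface. A sketch reconstructed from the trace; the loop structure is inferred from the repeated iterations, not quoted from the script:]

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {                        # used as: wait_for_bdev nvme0n1, and later wait_for_bdev ''
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }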
00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:21.453 12:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:21.453 12:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:22.822 12:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.754 12:35:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:23.754 12:35:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:24.687 12:35:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.619 12:35:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:26.990 12:35:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.990 [2024-10-30 12:35:59.380596] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:26.990 [2024-10-30 12:35:59.380680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.990 [2024-10-30 12:35:59.380716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.990 [2024-10-30 12:35:59.380734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.990 [2024-10-30 12:35:59.380747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.990 [2024-10-30 12:35:59.380760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.990 [2024-10-30 12:35:59.380772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.990 [2024-10-30 12:35:59.380785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.990 [2024-10-30 12:35:59.380797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.990 [2024-10-30 12:35:59.380810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:26.990 [2024-10-30 12:35:59.380822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.990 [2024-10-30 12:35:59.380834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1666400 is same with the state(6) to be set 00:24:26.990 [2024-10-30 12:35:59.390624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1666400 (9): Bad file descriptor 00:24:26.990 [2024-10-30 12:35:59.400664] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
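The once-a-second bdev_get_bdevs runs threaded through this trace are the script's get_bdev_list/wait_for_bdev pair. Reconstructed from the xtrace above (a sketch; the helper names and the jq/sort/xargs pipeline are the test's own, the exact loop bounds are not shown here):

get_bdev_list() {
    # Bdev names only, sorted, flattened to one line for a stable string compare.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once a second: expect '' while the path is gone,
    # nvme0n1 (later nvme1n1) once a controller is attached.
    local bdev=$1
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
        sleep 1
    done
}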
00:24:26.991 [2024-10-30 12:35:59.400700] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:26.991 [2024-10-30 12:35:59.400710] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:26.991 [2024-10-30 12:35:59.400718] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:26.991 [2024-10-30 12:35:59.400769] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.921 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.921 [2024-10-30 12:36:00.447300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:27.921 [2024-10-30 12:36:00.447377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1666400 with addr=10.0.0.2, port=4420 00:24:27.921 [2024-10-30 12:36:00.447403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1666400 is same with the state(6) to be set 00:24:27.921 [2024-10-30 12:36:00.447453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1666400 (9): Bad file descriptor 00:24:27.921 [2024-10-30 12:36:00.447944] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:27.921 [2024-10-30 12:36:00.447985] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:27.921 [2024-10-30 12:36:00.448001] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:27.921 [2024-10-30 12:36:00.448016] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:27.921 [2024-10-30 12:36:00.448029] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:27.921 [2024-10-30 12:36:00.448039] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:27.922 [2024-10-30 12:36:00.448061] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:27.922 [2024-10-30 12:36:00.448077] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
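This failure burst matches the cadence set at attach time: with --reconnect-delay-sec 1 the bdev layer retries the TCP connect roughly once a second, and with --ctrlr-loss-timeout-sec 2 it gives up on the controller after about two seconds of failed reconnects, which is what the next window records. While the loop runs, controller health can be sampled out-of-band; a sketch (output field names vary across SPDK versions):

# Watch the controller's reconnect state from the same RPC socket.
while sleep 1; do
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .
done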
00:24:27.922 [2024-10-30 12:36:00.448086] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:27.922 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.922 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.922 12:36:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.854 [2024-10-30 12:36:01.450577] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:28.854 [2024-10-30 12:36:01.450640] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:28.854 [2024-10-30 12:36:01.450663] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:28.855 [2024-10-30 12:36:01.450690] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:28.855 [2024-10-30 12:36:01.450702] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:28.855 [2024-10-30 12:36:01.450715] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:28.855 [2024-10-30 12:36:01.450724] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:28.855 [2024-10-30 12:36:01.450745] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:28.855 [2024-10-30 12:36:01.450779] bdev_nvme.c:7043:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:28.855 [2024-10-30 12:36:01.450833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.855 [2024-10-30 12:36:01.450854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.855 [2024-10-30 12:36:01.450871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.855 [2024-10-30 12:36:01.450883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.855 [2024-10-30 12:36:01.450896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.855 [2024-10-30 12:36:01.450916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.855 [2024-10-30 12:36:01.450930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.855 [2024-10-30 12:36:01.450941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.855 [2024-10-30 12:36:01.450954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.855 [2024-10-30 12:36:01.450965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.855 [2024-10-30 12:36:01.450978] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:28.855 [2024-10-30 12:36:01.451024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1655b40 (9): Bad file descriptor 00:24:28.855 [2024-10-30 12:36:01.452014] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:28.855 [2024-10-30 12:36:01.452035] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.855 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.113 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.113 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.113 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:29.113 12:36:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.045 12:36:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:30.045 12:36:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.977 [2024-10-30 12:36:03.470115] bdev_nvme.c:7292:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:30.977 [2024-10-30 12:36:03.470148] bdev_nvme.c:7378:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:30.977 [2024-10-30 12:36:03.470170] bdev_nvme.c:7255:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:30.977 [2024-10-30 12:36:03.558447] bdev_nvme.c:7221:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.977 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.235 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:31.235 12:36:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.235 [2024-10-30 12:36:03.781665] bdev_nvme.c:5583:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:31.235 [2024-10-30 12:36:03.782492] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1670ab0:1 started. 
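The outage and recovery bracketing this sequence are driven entirely with iproute2 inside the target's network namespace; grouped here for reference, the exact commands from the trace:

# Take the target-side path away: the host's qpair dies, reconnects fail with
# errno 110, and after the 2 s ctrlr-loss timeout nvme0n1 is deleted.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# Restore it: the still-running discovery service re-attaches and creates nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up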
00:24:31.235 [2024-10-30 12:36:03.783817] bdev_nvme.c:8088:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:31.235 [2024-10-30 12:36:03.783861] bdev_nvme.c:8088:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:31.235 [2024-10-30 12:36:03.783894] bdev_nvme.c:8088:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:31.235 [2024-10-30 12:36:03.783914] bdev_nvme.c:7111:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:31.235 [2024-10-30 12:36:03.783925] bdev_nvme.c:7070:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:31.235 [2024-10-30 12:36:03.789465] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1670ab0 was disconnected and freed. delete nvme_qpair. 00:24:32.170 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.170 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.170 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.170 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 697581 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 697581 ']' 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 697581 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 697581 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 697581' 00:24:32.171 killing process with pid 697581 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 697581 00:24:32.171 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 697581 00:24:32.429 12:36:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:32.429 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.429 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:32.429 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.429 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:32.429 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.429 12:36:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.429 rmmod nvme_tcp 00:24:32.429 rmmod nvme_fabrics 00:24:32.429 rmmod nvme_keyring 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 697556 ']' 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 697556 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 697556 ']' 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 697556 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 697556 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 697556' 00:24:32.429 killing process with pid 697556 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 697556 00:24:32.429 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 697556 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.687 12:36:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:35.224 00:24:35.224 real 0m17.953s 00:24:35.224 user 0m25.855s 00:24:35.224 sys 0m3.043s 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.224 ************************************ 00:24:35.224 END TEST nvmf_discovery_remove_ifc 00:24:35.224 ************************************ 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.224 ************************************ 00:24:35.224 START TEST nvmf_identify_kernel_target 00:24:35.224 ************************************ 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:35.224 * Looking for test storage... 
00:24:35.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.224 --rc genhtml_branch_coverage=1 00:24:35.224 --rc genhtml_function_coverage=1 00:24:35.224 --rc genhtml_legend=1 00:24:35.224 --rc geninfo_all_blocks=1 00:24:35.224 --rc geninfo_unexecuted_blocks=1 00:24:35.224 00:24:35.224 ' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.224 --rc genhtml_branch_coverage=1 00:24:35.224 --rc genhtml_function_coverage=1 00:24:35.224 --rc genhtml_legend=1 00:24:35.224 --rc geninfo_all_blocks=1 00:24:35.224 --rc geninfo_unexecuted_blocks=1 00:24:35.224 00:24:35.224 ' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.224 --rc genhtml_branch_coverage=1 00:24:35.224 --rc genhtml_function_coverage=1 00:24:35.224 --rc genhtml_legend=1 00:24:35.224 --rc geninfo_all_blocks=1 00:24:35.224 --rc geninfo_unexecuted_blocks=1 00:24:35.224 00:24:35.224 ' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.224 --rc genhtml_branch_coverage=1 00:24:35.224 --rc genhtml_function_coverage=1 00:24:35.224 --rc genhtml_legend=1 00:24:35.224 --rc geninfo_all_blocks=1 00:24:35.224 --rc geninfo_unexecuted_blocks=1 00:24:35.224 00:24:35.224 ' 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.224 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:35.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.225 12:36:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.126 12:36:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:37.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:37.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:37.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:37.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:24:37.126 00:24:37.126 --- 10.0.0.2 ping statistics --- 00:24:37.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.126 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:24:37.126 00:24:37.126 --- 10.0.0.1 ping statistics --- 00:24:37.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.126 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.126 12:36:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:37.126 12:36:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:38.561 Waiting for block devices as requested 00:24:38.561 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:38.561 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:38.561 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:38.561 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:38.825 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:38.826 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:38.826 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:38.826 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:39.086 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:39.086 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:39.086 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:39.345 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:39.345 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:39.345 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:39.345 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:39.603 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:39.603 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
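The trace below builds the kernel NVMe-oF target entirely through the nvmet configfs tree. Condensed into plain shell, the sequence configure_kernel_target runs in this test looks roughly like the sketch that follows; the xtrace only records the values being echoed, so the mapping of each echo to a specific attribute file (attr_model, device_path, addr_traddr, and so on) is inferred from the stock nvmet configfs layout rather than shown verbatim in the log.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet                       # loaded above once /sys/module/nvmet was found absent
    mkdir "$subsys" "$ns" "$port"        # a configfs mkdir instantiates each nvmet object

    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"  # model string seen later in Identify
    echo 1 > "$subsys/attr_allow_any_host"                          # no per-host ACLs for the test
    echo /dev/nvme0n1 > "$ns/device_path"                           # back namespace 1 with the idle NVMe disk
    echo 1 > "$ns/enable"

    echo 10.0.0.1 > "$port/addr_traddr"                             # target IP assigned by nvmf_tcp_init
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"                             # expose the subsystem on the port

Once the symlink lands, the kernel starts listening on 10.0.0.1:4420, which is why the nvme discover call in the trace immediately returns two discovery log records (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn).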
00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:39.603 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:39.862 No valid GPT data, bailing 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:39.862 00:24:39.862 Discovery Log Number of Records 2, Generation counter 2 00:24:39.862 =====Discovery Log Entry 0====== 00:24:39.862 trtype: tcp 00:24:39.862 adrfam: ipv4 00:24:39.862 subtype: current discovery subsystem 00:24:39.862 treq: not specified, sq flow control disable supported 00:24:39.862 portid: 1 00:24:39.862 trsvcid: 4420 00:24:39.862 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:39.862 traddr: 10.0.0.1 00:24:39.862 eflags: none 00:24:39.862 sectype: none 00:24:39.862 =====Discovery Log Entry 1====== 00:24:39.862 trtype: tcp 00:24:39.862 adrfam: ipv4 00:24:39.862 subtype: nvme subsystem 00:24:39.862 treq: not specified, sq flow control disable 
supported 00:24:39.862 portid: 1 00:24:39.862 trsvcid: 4420 00:24:39.862 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:39.862 traddr: 10.0.0.1 00:24:39.862 eflags: none 00:24:39.862 sectype: none 00:24:39.862 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:39.862 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:40.122 ===================================================== 00:24:40.122 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:40.122 ===================================================== 00:24:40.122 Controller Capabilities/Features 00:24:40.122 ================================ 00:24:40.122 Vendor ID: 0000 00:24:40.122 Subsystem Vendor ID: 0000 00:24:40.122 Serial Number: dd8161549a6e9f45aca2 00:24:40.122 Model Number: Linux 00:24:40.122 Firmware Version: 6.8.9-20 00:24:40.122 Recommended Arb Burst: 0 00:24:40.122 IEEE OUI Identifier: 00 00 00 00:24:40.122 Multi-path I/O 00:24:40.122 May have multiple subsystem ports: No 00:24:40.122 May have multiple controllers: No 00:24:40.122 Associated with SR-IOV VF: No 00:24:40.122 Max Data Transfer Size: Unlimited 00:24:40.122 Max Number of Namespaces: 0 00:24:40.122 Max Number of I/O Queues: 1024 00:24:40.122 NVMe Specification Version (VS): 1.3 00:24:40.122 NVMe Specification Version (Identify): 1.3 00:24:40.122 Maximum Queue Entries: 1024 00:24:40.122 Contiguous Queues Required: No 00:24:40.122 Arbitration Mechanisms Supported 00:24:40.122 Weighted Round Robin: Not Supported 00:24:40.122 Vendor Specific: Not Supported 00:24:40.122 Reset Timeout: 7500 ms 00:24:40.122 Doorbell Stride: 4 bytes 00:24:40.122 NVM Subsystem Reset: Not Supported 00:24:40.122 Command Sets Supported 00:24:40.122 NVM Command Set: Supported 00:24:40.122 Boot Partition: Not Supported 00:24:40.122 Memory Page Size Minimum: 4096 bytes 00:24:40.122 Memory Page Size Maximum: 4096 bytes 00:24:40.122 Persistent Memory Region: Not Supported 00:24:40.122 Optional Asynchronous Events Supported 00:24:40.122 Namespace Attribute Notices: Not Supported 00:24:40.122 Firmware Activation Notices: Not Supported 00:24:40.122 ANA Change Notices: Not Supported 00:24:40.122 PLE Aggregate Log Change Notices: Not Supported 00:24:40.122 LBA Status Info Alert Notices: Not Supported 00:24:40.122 EGE Aggregate Log Change Notices: Not Supported 00:24:40.122 Normal NVM Subsystem Shutdown event: Not Supported 00:24:40.122 Zone Descriptor Change Notices: Not Supported 00:24:40.122 Discovery Log Change Notices: Supported 00:24:40.122 Controller Attributes 00:24:40.122 128-bit Host Identifier: Not Supported 00:24:40.122 Non-Operational Permissive Mode: Not Supported 00:24:40.122 NVM Sets: Not Supported 00:24:40.122 Read Recovery Levels: Not Supported 00:24:40.122 Endurance Groups: Not Supported 00:24:40.122 Predictable Latency Mode: Not Supported 00:24:40.122 Traffic Based Keep ALive: Not Supported 00:24:40.122 Namespace Granularity: Not Supported 00:24:40.122 SQ Associations: Not Supported 00:24:40.122 UUID List: Not Supported 00:24:40.122 Multi-Domain Subsystem: Not Supported 00:24:40.122 Fixed Capacity Management: Not Supported 00:24:40.122 Variable Capacity Management: Not Supported 00:24:40.122 Delete Endurance Group: Not Supported 00:24:40.122 Delete NVM Set: Not Supported 00:24:40.122 Extended LBA Formats Supported: Not Supported 00:24:40.122 Flexible Data Placement 
Supported: Not Supported 00:24:40.122 00:24:40.122 Controller Memory Buffer Support 00:24:40.122 ================================ 00:24:40.122 Supported: No 00:24:40.122 00:24:40.122 Persistent Memory Region Support 00:24:40.122 ================================ 00:24:40.122 Supported: No 00:24:40.122 00:24:40.122 Admin Command Set Attributes 00:24:40.122 ============================ 00:24:40.122 Security Send/Receive: Not Supported 00:24:40.122 Format NVM: Not Supported 00:24:40.122 Firmware Activate/Download: Not Supported 00:24:40.122 Namespace Management: Not Supported 00:24:40.122 Device Self-Test: Not Supported 00:24:40.122 Directives: Not Supported 00:24:40.122 NVMe-MI: Not Supported 00:24:40.122 Virtualization Management: Not Supported 00:24:40.122 Doorbell Buffer Config: Not Supported 00:24:40.122 Get LBA Status Capability: Not Supported 00:24:40.122 Command & Feature Lockdown Capability: Not Supported 00:24:40.122 Abort Command Limit: 1 00:24:40.122 Async Event Request Limit: 1 00:24:40.122 Number of Firmware Slots: N/A 00:24:40.122 Firmware Slot 1 Read-Only: N/A 00:24:40.122 Firmware Activation Without Reset: N/A 00:24:40.122 Multiple Update Detection Support: N/A 00:24:40.122 Firmware Update Granularity: No Information Provided 00:24:40.123 Per-Namespace SMART Log: No 00:24:40.123 Asymmetric Namespace Access Log Page: Not Supported 00:24:40.123 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:40.123 Command Effects Log Page: Not Supported 00:24:40.123 Get Log Page Extended Data: Supported 00:24:40.123 Telemetry Log Pages: Not Supported 00:24:40.123 Persistent Event Log Pages: Not Supported 00:24:40.123 Supported Log Pages Log Page: May Support 00:24:40.123 Commands Supported & Effects Log Page: Not Supported 00:24:40.123 Feature Identifiers & Effects Log Page:May Support 00:24:40.123 NVMe-MI Commands & Effects Log Page: May Support 00:24:40.123 Data Area 4 for Telemetry Log: Not Supported 00:24:40.123 Error Log Page Entries Supported: 1 00:24:40.123 Keep Alive: Not Supported 00:24:40.123 00:24:40.123 NVM Command Set Attributes 00:24:40.123 ========================== 00:24:40.123 Submission Queue Entry Size 00:24:40.123 Max: 1 00:24:40.123 Min: 1 00:24:40.123 Completion Queue Entry Size 00:24:40.123 Max: 1 00:24:40.123 Min: 1 00:24:40.123 Number of Namespaces: 0 00:24:40.123 Compare Command: Not Supported 00:24:40.123 Write Uncorrectable Command: Not Supported 00:24:40.123 Dataset Management Command: Not Supported 00:24:40.123 Write Zeroes Command: Not Supported 00:24:40.123 Set Features Save Field: Not Supported 00:24:40.123 Reservations: Not Supported 00:24:40.123 Timestamp: Not Supported 00:24:40.123 Copy: Not Supported 00:24:40.123 Volatile Write Cache: Not Present 00:24:40.123 Atomic Write Unit (Normal): 1 00:24:40.123 Atomic Write Unit (PFail): 1 00:24:40.123 Atomic Compare & Write Unit: 1 00:24:40.123 Fused Compare & Write: Not Supported 00:24:40.123 Scatter-Gather List 00:24:40.123 SGL Command Set: Supported 00:24:40.123 SGL Keyed: Not Supported 00:24:40.123 SGL Bit Bucket Descriptor: Not Supported 00:24:40.123 SGL Metadata Pointer: Not Supported 00:24:40.123 Oversized SGL: Not Supported 00:24:40.123 SGL Metadata Address: Not Supported 00:24:40.123 SGL Offset: Supported 00:24:40.123 Transport SGL Data Block: Not Supported 00:24:40.123 Replay Protected Memory Block: Not Supported 00:24:40.123 00:24:40.123 Firmware Slot Information 00:24:40.123 ========================= 00:24:40.123 Active slot: 0 00:24:40.123 00:24:40.123 00:24:40.123 Error Log 00:24:40.123 
========= 00:24:40.123 00:24:40.123 Active Namespaces 00:24:40.123 ================= 00:24:40.123 Discovery Log Page 00:24:40.123 ================== 00:24:40.123 Generation Counter: 2 00:24:40.123 Number of Records: 2 00:24:40.123 Record Format: 0 00:24:40.123 00:24:40.123 Discovery Log Entry 0 00:24:40.123 ---------------------- 00:24:40.123 Transport Type: 3 (TCP) 00:24:40.123 Address Family: 1 (IPv4) 00:24:40.123 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:40.123 Entry Flags: 00:24:40.123 Duplicate Returned Information: 0 00:24:40.123 Explicit Persistent Connection Support for Discovery: 0 00:24:40.123 Transport Requirements: 00:24:40.123 Secure Channel: Not Specified 00:24:40.123 Port ID: 1 (0x0001) 00:24:40.123 Controller ID: 65535 (0xffff) 00:24:40.123 Admin Max SQ Size: 32 00:24:40.123 Transport Service Identifier: 4420 00:24:40.123 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:40.123 Transport Address: 10.0.0.1 00:24:40.123 Discovery Log Entry 1 00:24:40.123 ---------------------- 00:24:40.123 Transport Type: 3 (TCP) 00:24:40.123 Address Family: 1 (IPv4) 00:24:40.123 Subsystem Type: 2 (NVM Subsystem) 00:24:40.123 Entry Flags: 00:24:40.123 Duplicate Returned Information: 0 00:24:40.123 Explicit Persistent Connection Support for Discovery: 0 00:24:40.123 Transport Requirements: 00:24:40.123 Secure Channel: Not Specified 00:24:40.123 Port ID: 1 (0x0001) 00:24:40.123 Controller ID: 65535 (0xffff) 00:24:40.123 Admin Max SQ Size: 32 00:24:40.123 Transport Service Identifier: 4420 00:24:40.123 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:40.123 Transport Address: 10.0.0.1 00:24:40.123 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.123 get_feature(0x01) failed 00:24:40.123 get_feature(0x02) failed 00:24:40.123 get_feature(0x04) failed 00:24:40.123 ===================================================== 00:24:40.123 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:40.123 ===================================================== 00:24:40.123 Controller Capabilities/Features 00:24:40.123 ================================ 00:24:40.123 Vendor ID: 0000 00:24:40.123 Subsystem Vendor ID: 0000 00:24:40.123 Serial Number: b5a059dd61cdc59461ff 00:24:40.123 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:40.123 Firmware Version: 6.8.9-20 00:24:40.123 Recommended Arb Burst: 6 00:24:40.123 IEEE OUI Identifier: 00 00 00 00:24:40.123 Multi-path I/O 00:24:40.123 May have multiple subsystem ports: Yes 00:24:40.123 May have multiple controllers: Yes 00:24:40.123 Associated with SR-IOV VF: No 00:24:40.123 Max Data Transfer Size: Unlimited 00:24:40.123 Max Number of Namespaces: 1024 00:24:40.123 Max Number of I/O Queues: 128 00:24:40.123 NVMe Specification Version (VS): 1.3 00:24:40.123 NVMe Specification Version (Identify): 1.3 00:24:40.123 Maximum Queue Entries: 1024 00:24:40.123 Contiguous Queues Required: No 00:24:40.123 Arbitration Mechanisms Supported 00:24:40.123 Weighted Round Robin: Not Supported 00:24:40.123 Vendor Specific: Not Supported 00:24:40.123 Reset Timeout: 7500 ms 00:24:40.123 Doorbell Stride: 4 bytes 00:24:40.123 NVM Subsystem Reset: Not Supported 00:24:40.123 Command Sets Supported 00:24:40.123 NVM Command Set: Supported 00:24:40.123 Boot Partition: Not Supported 00:24:40.123 
Memory Page Size Minimum: 4096 bytes 00:24:40.123 Memory Page Size Maximum: 4096 bytes 00:24:40.123 Persistent Memory Region: Not Supported 00:24:40.123 Optional Asynchronous Events Supported 00:24:40.123 Namespace Attribute Notices: Supported 00:24:40.123 Firmware Activation Notices: Not Supported 00:24:40.123 ANA Change Notices: Supported 00:24:40.123 PLE Aggregate Log Change Notices: Not Supported 00:24:40.123 LBA Status Info Alert Notices: Not Supported 00:24:40.123 EGE Aggregate Log Change Notices: Not Supported 00:24:40.123 Normal NVM Subsystem Shutdown event: Not Supported 00:24:40.123 Zone Descriptor Change Notices: Not Supported 00:24:40.123 Discovery Log Change Notices: Not Supported 00:24:40.124 Controller Attributes 00:24:40.124 128-bit Host Identifier: Supported 00:24:40.124 Non-Operational Permissive Mode: Not Supported 00:24:40.124 NVM Sets: Not Supported 00:24:40.124 Read Recovery Levels: Not Supported 00:24:40.124 Endurance Groups: Not Supported 00:24:40.124 Predictable Latency Mode: Not Supported 00:24:40.124 Traffic Based Keep ALive: Supported 00:24:40.124 Namespace Granularity: Not Supported 00:24:40.124 SQ Associations: Not Supported 00:24:40.124 UUID List: Not Supported 00:24:40.124 Multi-Domain Subsystem: Not Supported 00:24:40.124 Fixed Capacity Management: Not Supported 00:24:40.124 Variable Capacity Management: Not Supported 00:24:40.124 Delete Endurance Group: Not Supported 00:24:40.124 Delete NVM Set: Not Supported 00:24:40.124 Extended LBA Formats Supported: Not Supported 00:24:40.124 Flexible Data Placement Supported: Not Supported 00:24:40.124 00:24:40.124 Controller Memory Buffer Support 00:24:40.124 ================================ 00:24:40.124 Supported: No 00:24:40.124 00:24:40.124 Persistent Memory Region Support 00:24:40.124 ================================ 00:24:40.124 Supported: No 00:24:40.124 00:24:40.124 Admin Command Set Attributes 00:24:40.124 ============================ 00:24:40.124 Security Send/Receive: Not Supported 00:24:40.124 Format NVM: Not Supported 00:24:40.124 Firmware Activate/Download: Not Supported 00:24:40.124 Namespace Management: Not Supported 00:24:40.124 Device Self-Test: Not Supported 00:24:40.124 Directives: Not Supported 00:24:40.124 NVMe-MI: Not Supported 00:24:40.124 Virtualization Management: Not Supported 00:24:40.124 Doorbell Buffer Config: Not Supported 00:24:40.124 Get LBA Status Capability: Not Supported 00:24:40.124 Command & Feature Lockdown Capability: Not Supported 00:24:40.124 Abort Command Limit: 4 00:24:40.124 Async Event Request Limit: 4 00:24:40.124 Number of Firmware Slots: N/A 00:24:40.124 Firmware Slot 1 Read-Only: N/A 00:24:40.124 Firmware Activation Without Reset: N/A 00:24:40.124 Multiple Update Detection Support: N/A 00:24:40.124 Firmware Update Granularity: No Information Provided 00:24:40.124 Per-Namespace SMART Log: Yes 00:24:40.124 Asymmetric Namespace Access Log Page: Supported 00:24:40.124 ANA Transition Time : 10 sec 00:24:40.124 00:24:40.124 Asymmetric Namespace Access Capabilities 00:24:40.124 ANA Optimized State : Supported 00:24:40.124 ANA Non-Optimized State : Supported 00:24:40.124 ANA Inaccessible State : Supported 00:24:40.124 ANA Persistent Loss State : Supported 00:24:40.124 ANA Change State : Supported 00:24:40.124 ANAGRPID is not changed : No 00:24:40.124 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:40.124 00:24:40.124 ANA Group Identifier Maximum : 128 00:24:40.124 Number of ANA Group Identifiers : 128 00:24:40.124 Max Number of Allowed Namespaces : 1024 00:24:40.124 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:40.124 Command Effects Log Page: Supported 00:24:40.124 Get Log Page Extended Data: Supported 00:24:40.124 Telemetry Log Pages: Not Supported 00:24:40.124 Persistent Event Log Pages: Not Supported 00:24:40.124 Supported Log Pages Log Page: May Support 00:24:40.124 Commands Supported & Effects Log Page: Not Supported 00:24:40.124 Feature Identifiers & Effects Log Page:May Support 00:24:40.124 NVMe-MI Commands & Effects Log Page: May Support 00:24:40.124 Data Area 4 for Telemetry Log: Not Supported 00:24:40.124 Error Log Page Entries Supported: 128 00:24:40.124 Keep Alive: Supported 00:24:40.124 Keep Alive Granularity: 1000 ms 00:24:40.124 00:24:40.124 NVM Command Set Attributes 00:24:40.124 ========================== 00:24:40.124 Submission Queue Entry Size 00:24:40.124 Max: 64 00:24:40.124 Min: 64 00:24:40.124 Completion Queue Entry Size 00:24:40.124 Max: 16 00:24:40.124 Min: 16 00:24:40.124 Number of Namespaces: 1024 00:24:40.124 Compare Command: Not Supported 00:24:40.124 Write Uncorrectable Command: Not Supported 00:24:40.124 Dataset Management Command: Supported 00:24:40.124 Write Zeroes Command: Supported 00:24:40.124 Set Features Save Field: Not Supported 00:24:40.124 Reservations: Not Supported 00:24:40.124 Timestamp: Not Supported 00:24:40.124 Copy: Not Supported 00:24:40.124 Volatile Write Cache: Present 00:24:40.124 Atomic Write Unit (Normal): 1 00:24:40.124 Atomic Write Unit (PFail): 1 00:24:40.124 Atomic Compare & Write Unit: 1 00:24:40.124 Fused Compare & Write: Not Supported 00:24:40.124 Scatter-Gather List 00:24:40.124 SGL Command Set: Supported 00:24:40.124 SGL Keyed: Not Supported 00:24:40.124 SGL Bit Bucket Descriptor: Not Supported 00:24:40.124 SGL Metadata Pointer: Not Supported 00:24:40.124 Oversized SGL: Not Supported 00:24:40.124 SGL Metadata Address: Not Supported 00:24:40.124 SGL Offset: Supported 00:24:40.124 Transport SGL Data Block: Not Supported 00:24:40.124 Replay Protected Memory Block: Not Supported 00:24:40.124 00:24:40.124 Firmware Slot Information 00:24:40.124 ========================= 00:24:40.124 Active slot: 0 00:24:40.124 00:24:40.124 Asymmetric Namespace Access 00:24:40.124 =========================== 00:24:40.124 Change Count : 0 00:24:40.124 Number of ANA Group Descriptors : 1 00:24:40.124 ANA Group Descriptor : 0 00:24:40.124 ANA Group ID : 1 00:24:40.124 Number of NSID Values : 1 00:24:40.124 Change Count : 0 00:24:40.124 ANA State : 1 00:24:40.124 Namespace Identifier : 1 00:24:40.124 00:24:40.124 Commands Supported and Effects 00:24:40.124 ============================== 00:24:40.124 Admin Commands 00:24:40.124 -------------- 00:24:40.124 Get Log Page (02h): Supported 00:24:40.124 Identify (06h): Supported 00:24:40.124 Abort (08h): Supported 00:24:40.124 Set Features (09h): Supported 00:24:40.124 Get Features (0Ah): Supported 00:24:40.124 Asynchronous Event Request (0Ch): Supported 00:24:40.124 Keep Alive (18h): Supported 00:24:40.124 I/O Commands 00:24:40.124 ------------ 00:24:40.124 Flush (00h): Supported 00:24:40.124 Write (01h): Supported LBA-Change 00:24:40.124 Read (02h): Supported 00:24:40.124 Write Zeroes (08h): Supported LBA-Change 00:24:40.124 Dataset Management (09h): Supported 00:24:40.124 00:24:40.124 Error Log 00:24:40.124 ========= 00:24:40.124 Entry: 0 00:24:40.124 Error Count: 0x3 00:24:40.124 Submission Queue Id: 0x0 00:24:40.124 Command Id: 0x5 00:24:40.124 Phase Bit: 0 00:24:40.124 Status Code: 0x2 00:24:40.124 Status Code Type: 0x0 00:24:40.124 Do Not Retry: 1 00:24:40.125 
Error Location: 0x28 00:24:40.125 LBA: 0x0 00:24:40.125 Namespace: 0x0 00:24:40.125 Vendor Log Page: 0x0 00:24:40.125 ----------- 00:24:40.125 Entry: 1 00:24:40.125 Error Count: 0x2 00:24:40.125 Submission Queue Id: 0x0 00:24:40.125 Command Id: 0x5 00:24:40.125 Phase Bit: 0 00:24:40.125 Status Code: 0x2 00:24:40.125 Status Code Type: 0x0 00:24:40.125 Do Not Retry: 1 00:24:40.125 Error Location: 0x28 00:24:40.125 LBA: 0x0 00:24:40.125 Namespace: 0x0 00:24:40.125 Vendor Log Page: 0x0 00:24:40.125 ----------- 00:24:40.125 Entry: 2 00:24:40.125 Error Count: 0x1 00:24:40.125 Submission Queue Id: 0x0 00:24:40.125 Command Id: 0x4 00:24:40.125 Phase Bit: 0 00:24:40.125 Status Code: 0x2 00:24:40.125 Status Code Type: 0x0 00:24:40.125 Do Not Retry: 1 00:24:40.125 Error Location: 0x28 00:24:40.125 LBA: 0x0 00:24:40.125 Namespace: 0x0 00:24:40.125 Vendor Log Page: 0x0 00:24:40.125 00:24:40.125 Number of Queues 00:24:40.125 ================ 00:24:40.125 Number of I/O Submission Queues: 128 00:24:40.125 Number of I/O Completion Queues: 128 00:24:40.125 00:24:40.125 ZNS Specific Controller Data 00:24:40.125 ============================ 00:24:40.125 Zone Append Size Limit: 0 00:24:40.125 00:24:40.125 00:24:40.125 Active Namespaces 00:24:40.125 ================= 00:24:40.125 get_feature(0x05) failed 00:24:40.125 Namespace ID:1 00:24:40.125 Command Set Identifier: NVM (00h) 00:24:40.125 Deallocate: Supported 00:24:40.125 Deallocated/Unwritten Error: Not Supported 00:24:40.125 Deallocated Read Value: Unknown 00:24:40.125 Deallocate in Write Zeroes: Not Supported 00:24:40.125 Deallocated Guard Field: 0xFFFF 00:24:40.125 Flush: Supported 00:24:40.125 Reservation: Not Supported 00:24:40.125 Namespace Sharing Capabilities: Multiple Controllers 00:24:40.125 Size (in LBAs): 1953525168 (931GiB) 00:24:40.125 Capacity (in LBAs): 1953525168 (931GiB) 00:24:40.125 Utilization (in LBAs): 1953525168 (931GiB) 00:24:40.125 UUID: 81acfee1-b911-4f52-bc96-751e76359e2e 00:24:40.125 Thin Provisioning: Not Supported 00:24:40.125 Per-NS Atomic Units: Yes 00:24:40.125 Atomic Boundary Size (Normal): 0 00:24:40.125 Atomic Boundary Size (PFail): 0 00:24:40.125 Atomic Boundary Offset: 0 00:24:40.125 NGUID/EUI64 Never Reused: No 00:24:40.125 ANA group ID: 1 00:24:40.125 Namespace Write Protected: No 00:24:40.125 Number of LBA Formats: 1 00:24:40.125 Current LBA Format: LBA Format #00 00:24:40.125 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:40.125 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.125 rmmod nvme_tcp 00:24:40.125 rmmod nvme_fabrics 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:40.125 12:36:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.125 12:36:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:42.662 12:36:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:43.598 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:43.598 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:43.598 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:44.537 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:44.537 00:24:44.537 real 0m9.805s 00:24:44.537 user 0m2.134s 00:24:44.537 sys 0m3.577s 00:24:44.537 12:36:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:44.537 12:36:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.537 ************************************ 00:24:44.537 END TEST nvmf_identify_kernel_target 00:24:44.537 ************************************ 00:24:44.794 12:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:44.794 12:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:44.794 12:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:44.794 12:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.794 ************************************ 00:24:44.794 START TEST nvmf_auth_host 00:24:44.794 ************************************ 00:24:44.794 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:44.795 * Looking for test storage... 
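The clean_kernel_target calls a few lines up tear the target down in the reverse order it was created: the namespace is disabled, the port-to-subsystem link is removed, the configfs directories are deleted, and the nvmet modules are unloaded once the holders check shows nothing still using them. As a plain-shell sketch, reusing the path variables from the setup sketch earlier (the file receiving the echo 0 is again inferred from the nvmet layout):

    echo 0 > "$ns/enable"                                  # disable the namespace first
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # drop the port->subsystem symlink
    rmdir "$ns" "$port" "$subsys"                          # rmdir fails on a still-populated configfs node
    modprobe -r nvmet_tcp nvmet                            # safe only after the holders check above

The ordering matters: configfs refuses to rmdir a directory that still has children or an enabled namespace, so the teardown must mirror the setup step for step.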
00:24:44.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:44.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.795 --rc genhtml_branch_coverage=1 00:24:44.795 --rc genhtml_function_coverage=1 00:24:44.795 --rc genhtml_legend=1 00:24:44.795 --rc geninfo_all_blocks=1 00:24:44.795 --rc geninfo_unexecuted_blocks=1 00:24:44.795 00:24:44.795 ' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:44.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.795 --rc genhtml_branch_coverage=1 00:24:44.795 --rc genhtml_function_coverage=1 00:24:44.795 --rc genhtml_legend=1 00:24:44.795 --rc geninfo_all_blocks=1 00:24:44.795 --rc geninfo_unexecuted_blocks=1 00:24:44.795 00:24:44.795 ' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:44.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.795 --rc genhtml_branch_coverage=1 00:24:44.795 --rc genhtml_function_coverage=1 00:24:44.795 --rc genhtml_legend=1 00:24:44.795 --rc geninfo_all_blocks=1 00:24:44.795 --rc geninfo_unexecuted_blocks=1 00:24:44.795 00:24:44.795 ' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:44.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.795 --rc genhtml_branch_coverage=1 00:24:44.795 --rc genhtml_function_coverage=1 00:24:44.795 --rc genhtml_legend=1 00:24:44.795 --rc geninfo_all_blocks=1 00:24:44.795 --rc geninfo_unexecuted_blocks=1 00:24:44.795 00:24:44.795 ' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.795 12:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.795 12:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.323 12:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:47.323 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.323 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:47.323 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.324 
12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:47.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:47.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.324 12:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:24:47.324 00:24:47.324 --- 10.0.0.2 ping statistics --- 00:24:47.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.324 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:24:47.324 00:24:47.324 --- 10.0.0.1 ping statistics --- 00:24:47.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.324 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=705416 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 705416 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 705416 ']' 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
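The topology assembled just above splits the two E810 ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP/4420 is opened, and both directions are ping-verified. A condensed sketch with the names and addresses from the trace (the iptables comment tag is omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into its namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns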
00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d496473839666915d3d04338ef3793a4 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pp9 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d496473839666915d3d04338ef3793a4 0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d496473839666915d3d04338ef3793a4 0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d496473839666915d3d04338ef3793a4 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:47.324 12:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.582 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pp9 00:24:47.582 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pp9 00:24:47.582 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pp9 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.583 12:36:20 
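gen_dhchap_key repeats below for every keys[i]/ckeys[i] pair: draw len/2 random bytes as hex with xxd, then let the trailing 'python -' step wrap them in the DH-HMAC-CHAP secret representation. The trace hides the heredoc body, so the stand-in below is an assumption: it treats the base64 payload as the ASCII hex secret followed by its CRC-32 (little-endian byte order assumed), which reproduces keys of the shape that appear later in the trace (DHHC-1:00:ZDQ5...):

format_dhchap_key() {   # assumed equivalent of the hidden 'python -' heredoc
    local key=$1 digest=$2
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
}
format_dhchap_key d496473839666915d3d04338ef3793a4 0   # keys[0] from the trace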
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fbbbdf78a252c7a0ea7bae32642f73b3236d40890e839ed4ed3c553791d00708 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tvk 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fbbbdf78a252c7a0ea7bae32642f73b3236d40890e839ed4ed3c553791d00708 3 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fbbbdf78a252c7a0ea7bae32642f73b3236d40890e839ed4ed3c553791d00708 3 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fbbbdf78a252c7a0ea7bae32642f73b3236d40890e839ed4ed3c553791d00708 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tvk 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tvk 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tvk 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cbb46641037e0bcfeb64a9e2fe35a911bf065537736f97c1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eR1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cbb46641037e0bcfeb64a9e2fe35a911bf065537736f97c1 0 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cbb46641037e0bcfeb64a9e2fe35a911bf065537736f97c1 0 
00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cbb46641037e0bcfeb64a9e2fe35a911bf065537736f97c1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eR1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eR1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eR1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=379407eb98b72c4de4ed205a1ebac21af0dd839b06d6e00c 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jve 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 379407eb98b72c4de4ed205a1ebac21af0dd839b06d6e00c 2 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 379407eb98b72c4de4ed205a1ebac21af0dd839b06d6e00c 2 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=379407eb98b72c4de4ed205a1ebac21af0dd839b06d6e00c 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jve 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jve 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jve 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.583 12:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9ba8ad2dcf9c7f2721d4040c110f163c 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XJJ 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9ba8ad2dcf9c7f2721d4040c110f163c 1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9ba8ad2dcf9c7f2721d4040c110f163c 1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9ba8ad2dcf9c7f2721d4040c110f163c 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XJJ 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XJJ 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.XJJ 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22b327c024e123856d869a6174f124ff 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3fF 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22b327c024e123856d869a6174f124ff 1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22b327c024e123856d869a6174f124ff 1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=22b327c024e123856d869a6174f124ff 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:47.583 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3fF 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3fF 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3fF 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dd3b7af1d38baf3076132902f9a6e41a7cefe4f30f440de5 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bgG 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dd3b7af1d38baf3076132902f9a6e41a7cefe4f30f440de5 2 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dd3b7af1d38baf3076132902f9a6e41a7cefe4f30f440de5 2 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dd3b7af1d38baf3076132902f9a6e41a7cefe4f30f440de5 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bgG 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bgG 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bgG 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:47.841 12:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=35b7444fcaffd9287b039d4dd95668f8 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uDL 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 35b7444fcaffd9287b039d4dd95668f8 0 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 35b7444fcaffd9287b039d4dd95668f8 0 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=35b7444fcaffd9287b039d4dd95668f8 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uDL 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uDL 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.uDL 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4ab15f4f55960b0071de4dfcf913a069e467d6b27a31ddeb4a890eefe42831f7 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mXR 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4ab15f4f55960b0071de4dfcf913a069e467d6b27a31ddeb4a890eefe42831f7 3 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4ab15f4f55960b0071de4dfcf913a069e467d6b27a31ddeb4a890eefe42831f7 3 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4ab15f4f55960b0071de4dfcf913a069e467d6b27a31ddeb4a890eefe42831f7 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mXR 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mXR 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mXR 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 705416 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 705416 ']' 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.841 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:47.842 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.842 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:47.842 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pp9 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tvk ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tvk 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eR1 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jve ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.jve 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XJJ 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3fF ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3fF 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bgG 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.uDL ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.uDL 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mXR 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.100 12:36:20 
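With all five key files written, the @80-@82 loop above registers each one with the target's keyring over RPC, adding the controller key only when ckeys[i] is non-empty. Condensed (rpc_cmd is the suite's wrapper around the RPC client):

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"   # ckeys[4] is empty
done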
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:48.100 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:48.358 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:48.358 12:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:49.289 Waiting for block devices as requested 00:24:49.289 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:49.546 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:49.546 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:49.803 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:49.803 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:49.803 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:50.059 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:50.059 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:50.059 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:50.059 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:50.315 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:50.315 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:50.315 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:50.315 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:50.587 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:50.587 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:50.587 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:51.151 No valid GPT data, bailing 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:51.151 12:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:51.151 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:51.152 00:24:51.152 Discovery Log Number of Records 2, Generation counter 2 00:24:51.152 =====Discovery Log Entry 0====== 00:24:51.152 trtype: tcp 00:24:51.152 adrfam: ipv4 00:24:51.152 subtype: current discovery subsystem 00:24:51.152 treq: not specified, sq flow control disable supported 00:24:51.152 portid: 1 00:24:51.152 trsvcid: 4420 00:24:51.152 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:51.152 traddr: 10.0.0.1 00:24:51.152 eflags: none 00:24:51.152 sectype: none 00:24:51.152 =====Discovery Log Entry 1====== 00:24:51.152 trtype: tcp 00:24:51.152 adrfam: ipv4 00:24:51.152 subtype: nvme subsystem 00:24:51.152 treq: not specified, sq flow control disable supported 00:24:51.152 portid: 1 00:24:51.152 trsvcid: 4420 00:24:51.152 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:51.152 traddr: 10.0.0.1 00:24:51.152 eflags: none 00:24:51.152 sectype: none 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
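configure_kernel_target and nvmet_auth_init, traced above, build the kernel nvmet side through configfs. The xtrace shows only the redirected echo values, so the attribute filenames below are filled in from the standard kernel nvmet layout as an assumption, not read from the trace:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"   # assumed attribute
echo 1 > "$subsys/attr_allow_any_host"                          # assumed target of '# echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# nvmet_auth_init then allow-lists the host and drops allow-any:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"                          # the '# echo 0' above
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"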
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
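nvmet_auth_set_key pushes the digest, DH group, and both DHHC-1 secrets into that host entry; the echo targets are again hidden by the trace, so the dhchap_* attribute names below are the kernel nvmet ones, assumed:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
echo ffdhe2048 > "$host/dhchap_dhgroup"        # DH group under test
echo "$key" > "$host/dhchap_key"               # the DHHC-1:00:... string echoed above
echo "$ckey" > "$host/dhchap_ctrl_key"         # only when a controller key is set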
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.152 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.410 nvme0n1 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.410 12:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.668 nvme0n1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.668 12:36:24 
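That completes one pass of connect_authenticate; the sweep that continues below repeats it for every digest, DH group, and key id. Each pass, condensed from the trace:

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # authenticated attach worked
rpc_cmd bdev_nvme_detach_controller nvme0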
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.668 nvme0n1 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.668 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.925 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.926 nvme0n1 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.926 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 nvme0n1 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 
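Note: keyid 4 is the one entry in this matrix with no controller key. In the nvmet_auth_set_key trace at this point, ckey expands empty (the [[ -z '' ]] guard at host/auth.sh@51 then skips it), and on the host side the @58 expansion drops the --dhchap-ctrlr-key flag entirely, so the keyid=4 attach below carries only --dhchap-key key4. A minimal sketch of that mechanism, assuming the keys/ckeys arrays implied by the trace (names such as "key4" appear to reference keys registered earlier in the test, outside this section):

    # ":+" expansion as seen at host/auth.sh@58: the ckey array stays empty
    # when ckeys[keyid] is empty, so "${ckey[@]}" contributes no arguments.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
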
00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.183 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.184 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 nvme0n1 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.442 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.442 
12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.442 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.700 nvme0n1 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:52.700 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.701 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.701 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.960 nvme0n1 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.960 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.960 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.218 nvme0n1 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.218 12:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.218 12:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 nvme0n1 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
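Note: the @101/@102 markers recurring through this trace are the test's two nested loops; the run shown here is the sha256 digest pass, currently finishing ffdhe3072 before moving on to ffdhe4096 below. An assumed reconstruction of the skeleton (not the verbatim host/auth.sh; array contents abbreviated to placeholders):

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)        # groups exercised in this section
    keys=("$KEY0" "$KEY1" "$KEY2" "$KEY3" "$KEY4")  # DHHC-1 secrets; placeholders here
    ckeys=("$CKEY0" "$CKEY1" "$CKEY2" "$CKEY3" "")  # keyid 4 has no controller key
    for dhgroup in "${dhgroups[@]}"; do             # host/auth.sh@101
        for keyid in "${!keys[@]}"; do              # host/auth.sh@102
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # @103: program the target
            connect_authenticate sha256 "$dhgroup" "$keyid"  # @104: host-side round trip
        done
    done
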
00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.476 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.734 nvme0n1 00:24:53.734 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.734 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.734 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.735 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.993 nvme0n1 00:24:53.993 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.993 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.993 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.993 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.993 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.993 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.251 12:36:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.251 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.509 nvme0n1 00:24:54.509 12:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
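Note: every connect_authenticate round in this log is the same five-step RPC sequence; a sketch reconstructed from the @55-@65 trace markers (an approximation, not the verbatim function):

    connect_authenticate() {                # host/auth.sh@55-@65, reconstructed
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})       # @58
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"           # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"    # @61; 10.0.0.1 from get_main_ns_ip
        # Pass criterion: the freshly authenticated controller is listed by name.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                             # @65
    }
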
00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.509 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.510 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.510 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.510 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.510 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.510 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.510 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.768 nvme0n1 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
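Note the ckey assignment at host/auth.sh@58: the optional --dhchap-ctrlr-key flag is built with bash's ${parameter:+word} expansion, so it vanishes entirely when no controller key exists, as in the keyid=4 cycles further below where ckey is empty and the attach is issued with --dhchap-key key4 alone. A standalone sketch of that idiom (the secrets here are illustrative placeholders):

  # ${ckeys[keyid]:+...} expands to the flag pair only when a controller key
  # is set, so unidirectional and bidirectional auth share one attach call.
  ckeys=("c0" "c1" "c2" "c3" "")          # placeholder secrets; ckeys[4] empty
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
  done
  # keyid 4 prints no --dhchap-ctrlr-key because its controller key is unset.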
00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.768 12:36:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.768 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.027 nvme0n1 00:24:55.027 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.027 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.027 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.027 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.027 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.027 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:55.285 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.286 12:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.543 nvme0n1 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.543 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.544 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.110 nvme0n1 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.110 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.111 12:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.677 nvme0n1 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.677 12:36:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.677 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.678 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.244 nvme0n1 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:24:57.244 
12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.244 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.245 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.245 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.245 12:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.811 nvme0n1 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.811 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.378 nvme0n1 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:58.378 12:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.378 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.379 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.379 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.379 12:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.312 nvme0n1 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:59.312 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.313 12:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 nvme0n1 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.245 12:36:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.245 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.246 12:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.180 nvme0n1 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.180 12:36:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.180 12:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.115 nvme0n1 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.115 12:36:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.115 12:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.050 nvme0n1 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.050 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:03.051 
12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.051 nvme0n1 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.051 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 nvme0n1 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.310 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.567 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.568 nvme0n1 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:03.568 12:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.568 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.828 nvme0n1 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:03.828 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.115 nvme0n1 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.115 12:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.115 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.377 nvme0n1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.377 12:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.377 12:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.377 12:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.636 nvme0n1 00:25:04.636 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.636 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.636 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.636 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.636 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.636 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.637 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.895 nvme0n1 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.895 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.153 nvme0n1 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.153 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.153 
12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.154 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.412 nvme0n1 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.412 
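An aside on the DHHC-1:xx:...: strings that keep appearing in this trace: they are NVMe DH-HMAC-CHAP secrets in the standard interchange format. As far as I can tell, the second field indicates the hash the secret is sized for (00 = unqualified, 01 = SHA-256 / 32 bytes, 02 = SHA-384 / 48 bytes, 03 = SHA-512 / 64 bytes), and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick sanity check on the keyid=3 secret from the log, using only coreutils:

key='DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==:'
b64=${key#DHHC-1:02:}   # strip the format/hash prefix
b64=${b64%:}            # and the trailing colon
printf '%s' "$b64" | base64 -d | wc -c   # prints 52: 48-byte SHA-384-sized secret + 4-byte CRC-32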
12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.412 12:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.671 nvme0n1 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.671 12:36:38 
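Stripped of the xtrace noise, each connect_authenticate round above boils down to two RPCs against the host application. A condensed sketch using scripts/rpc.py directly (rpc_cmd is effectively the test suite's wrapper around it; the key1/ckey1 names are assumed to have been registered with SPDK's keyring earlier in the script):

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # omit --dhchap-ctrlr-key for unidirectional auth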
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.671 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.930 nvme0n1 00:25:05.930 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.930 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.930 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.930 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.930 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.930 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.193 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.451 nvme0n1 00:25:06.451 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.451 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.451 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.451 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.451 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.451 12:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.451 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.452 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.709 nvme0n1 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.709 12:36:39 
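The get_main_ns_ip fragment that recurs before every attach (local -A ip_candidates, [[ -z tcp ]], echo 10.0.0.1) is transport-to-address resolution via bash indirect expansion. A reconstruction inferred from the trace; the real helper lives in nvmf/common.sh and may guard things slightly differently:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # TEST_TRANSPORT=tcp here, so this selects the variable name NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion: ${!ip} resolves to 10.0.0.1
    echo "${!ip}"
}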
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.709 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.967 nvme0n1 00:25:06.967 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.967 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.967 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.967 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.967 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.226 12:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.790 nvme0n1 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.790 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.791 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 nvme0n1 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.356 12:36:40 
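The verification step after every attach is visible here: list controllers, pull the names with jq, and string-compare. The right-hand side shows up in the trace as \n\v\m\e\0 because xtrace escapes each character of a quoted [[ == ]] operand to mark it as a literal match rather than a glob. The same check as a standalone snippet (a sketch, again via rpc.py rather than the rpc_cmd wrapper):

name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # quoted RHS: literal comparison, printed by xtrace as \n\v\m\e\0
scripts/rpc.py bdev_nvme_detach_controller nvme0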
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.356 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.357 12:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.357 12:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.923 nvme0n1 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.924 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.924 
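On the target side, nvmet_auth_set_key's echoes above (hmac(sha384), ffdhe6144, the DHHC-1 secrets) have their redirections hidden by xtrace. A plausible expansion, assuming the Linux kernel nvmet target's per-host configfs attributes are the destination; treat the paths as illustrative rather than confirmed by the log:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest to negotiate
echo ffdhe6144 > "$host/dhchap_dhgroup"        # DH group
echo "$key" > "$host/dhchap_key"               # host secret, DHHC-1 format
[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # optional controller secret for bidirectional auth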
12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.490 nvme0n1 00:25:09.490 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.490 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.490 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.490 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.490 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.490 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.491 12:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.748 nvme0n1 00:25:09.748 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.748 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.748 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.748 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.748 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.006 12:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.006 12:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.940 nvme0n1 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:10.940 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.941 12:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.874 nvme0n1 00:25:11.874 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.874 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.874 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.874 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.875 
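The nvmf/common.sh@769-783 lines tracing through here are the get_main_ns_ip helper resolving which address the host should dial for the transport under test. A rough reconstruction from the trace; the transport variable's name is an assumption, since xtrace only shows its expanded value ("tcp"):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # $TEST_TRANSPORT is a guessed name; the trace shows it expanding to "tcp"
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # indirect expansion: NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }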
12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.875 12:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.809 nvme0n1 00:25:12.809 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.809 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.809 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.809 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.810 12:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.744 nvme0n1 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.744 12:36:46 
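Every rpc_cmd invocation in this log is bracketed by the same two autotest_common.sh lines: xtrace_disable at @561 before the RPC runs, and the [[ 0 == 0 ]] status assertion at @589 after it. In outline the wrapper behaves like the sketch below; the persistent rpc.py session the real helper maintains is elided, so treat this as a simplification rather than SPDK's implementation:

    rpc_cmd() {
        xtrace_disable                  # keeps the RPC plumbing out of the trace
        "$rootdir/scripts/rpc.py" "$@"  # simplified; the real helper reuses one session
        local rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                  # shows up in the log as "[[ 0 == 0 ]]"
    }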
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.744 12:36:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.744 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.745 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.745 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.745 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.745 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.745 12:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 nvme0n1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.679 nvme0n1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.679 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.937 nvme0n1 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.937 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:14.938 
12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.938 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.196 nvme0n1 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.196 
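connect_authenticate (host/auth.sh@55-65) is the host-side half of each iteration; every command in the sketch below appears verbatim in the trace, and only the function framing and parameter names around them are reconstructed:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # pass --dhchap-ctrlr-key only when a controller key was generated for keyid
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # the attach only yields a controller if DH-HMAC-CHAP negotiation succeeded
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare nvme0n1 strings scattered through the log are consistent with the controller's namespace surfacing after each successful attach.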
12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.196 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.455 nvme0n1 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.455 12:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.455 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.714 nvme0n1 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.714 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.973 nvme0n1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.973 
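Two recurring trace lines are worth decoding. The [[ nvme0 == \n\v\m\e\0 ]] at host/auth.sh@64 is bash xtrace printing a quoted (literal, non-glob) pattern match with every character of the right-hand side backslash-escaped; it asserts that the sole controller returned by bdev_nvme_get_controllers is named nvme0. And the for-lines at host/auth.sh@100-103 reveal the sweep driving this whole section, schematically:

    for digest in "${digests[@]}"; do        # this stretch covers sha384, then sha512
        for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 through ffdhe8192
            for keyid in "${!keys[@]}"; do   # 0 through 4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done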
12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.973 12:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.973 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.231 nvme0n1 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.231 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:16.232 12:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.232 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.490 nvme0n1 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.490 12:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:16.490 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.491 12:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.491 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.749 nvme0n1 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.749 
12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.749 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
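Each iteration traced above follows the same pattern: host/auth.sh provisions a DH-HMAC-CHAP key on the target side (nvmet_auth_set_key sha512 <dhgroup> <keyid>), restricts the host to a single digest/DH-group pair via bdev_nvme_set_options, attaches a controller over TCP with the matching --dhchap-key (plus --dhchap-ctrlr-key for keyids that carry a controller key; keyid 4 above has none, so it is omitted), confirms nvme0 appears in bdev_nvme_get_controllers, and detaches it before the next combination. A minimal standalone sketch of one such cycle, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against a running target and that the named keys (key0, ckey0) were already registered with the keyring, as the full script does earlier:

# One connect_authenticate cycle (sha512 / ffdhe3072 / keyid 0), reusing the
# addresses and NQNs from the log. rpc path is an assumption for illustration.
rpc=./scripts/rpc.py

# Host side: accept only this digest/DH-group combination for DH-HMAC-CHAP.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Attach to the authenticated subsystem; ckey0 enables bidirectional auth.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up, then tear it down for the next iteration.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0

The DHHC-1:NN:...: strings echoed into the target configuration are keys in the NVMe TP 8006 secret representation, where the second field indicates how the secret was transformed (00 = unmodified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why key and controller-key pairs in the loop mix different NN prefixes.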
00:25:17.007 nvme0n1 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:17.007 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.008 12:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.008 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.265 nvme0n1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.265 12:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.265 12:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.265 12:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.523 nvme0n1 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.523 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.524 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.781 nvme0n1 00:25:17.781 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.781 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.781 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.781 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.781 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.781 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.039 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.040 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.298 nvme0n1 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.298 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.299 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.299 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.299 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.299 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.299 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.299 12:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.556 nvme0n1 00:25:18.556 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.556 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.556 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.556 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.556 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.557 12:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.557 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.122 nvme0n1 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.122 12:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.122 12:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.686 nvme0n1 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.686 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 nvme0n1 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.252 12:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.817 nvme0n1 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.817 12:36:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.817 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.382 nvme0n1 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:21.382 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDQ5NjQ3MzgzOTY2NjkxNWQzZDA0MzM4ZWYzNzkzYTQQIjtQ: 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: ]] 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmJiYmRmNzhhMjUyYzdhMGVhN2JhZTMyNjQyZjczYjMyMzZkNDA4OTBlODM5ZWQ0ZWQzYzU1Mzc5MWQwMDcwOMJJ6kY=: 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.383 12:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.316 nvme0n1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.316 12:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.249 nvme0n1 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.249 12:36:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.249 12:36:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.249 12:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.184 nvme0n1 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQzYjdhZjFkMzhiYWYzMDc2MTMyOTAyZjlhNmU0MWE3Y2VmZTRmMzBmNDQwZGU1+7BgpQ==: 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzViNzQ0NGZjYWZmZDkyODdiMDM5ZDRkZDk1NjY4ZjjqITZc: 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.184 12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.184 
12:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 nvme0n1 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.118 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NGFiMTVmNGY1NTk2MGIwMDcxZGU0ZGZjZjkxM2EwNjllNDY3ZDZiMjdhMzFkZGViNGE4OTBlZWZlNDI4MzFmN0vY6Ao=: 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.119 12:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.684 nvme0n1 00:25:25.684 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.684 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.684 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.684 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.684 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.943 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.943 request: 00:25:25.943 { 00:25:25.943 "name": "nvme0", 00:25:25.943 "trtype": "tcp", 00:25:25.943 "traddr": "10.0.0.1", 00:25:25.943 "adrfam": "ipv4", 00:25:25.943 "trsvcid": "4420", 00:25:25.943 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:25.943 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:25.943 "prchk_reftag": false, 00:25:25.943 "prchk_guard": false, 00:25:25.943 "hdgst": false, 00:25:25.944 "ddgst": false, 00:25:25.944 "allow_unrecognized_csi": false, 00:25:25.944 "method": "bdev_nvme_attach_controller", 00:25:25.944 "req_id": 1 00:25:25.944 } 00:25:25.944 Got JSON-RPC error response 00:25:25.944 response: 00:25:25.944 { 00:25:25.944 "code": -5, 00:25:25.944 "message": "Input/output error" 00:25:25.944 } 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
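The trace above is the suite's first negative case: host/auth.sh@110 keyed the target for keyid 1 (sha256, ffdhe2048), host/auth.sh@112 then wraps bdev_nvme_attach_controller in NOT and attempts the attach with no --dhchap-key at all, and the JSON-RPC call fails with code -5 (Input/output error), which the NOT helper from autotest_common.sh inverts into a pass before @114 verifies that no controller was created. A minimal standalone sketch of the same expected-failure probe, assuming a running SPDK target at 10.0.0.1:4420 and the usual scripts/rpc.py helper (the RPC name and every flag are taken from the trace; the script path is an assumption):

#!/usr/bin/env bash
# Negative test sketch: an attach without a DH-HMAC-CHAP key must be rejected
# by a target that requires in-band authentication.
if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
  echo "FAIL: unauthenticated attach unexpectedly succeeded" >&2
  exit 1
fi
echo "OK: attach rejected, matching the JSON-RPC error -5 seen above"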
00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.944 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.203 request: 00:25:26.203 { 00:25:26.203 "name": "nvme0", 00:25:26.203 "trtype": "tcp", 00:25:26.203 "traddr": "10.0.0.1", 00:25:26.203 "adrfam": "ipv4", 00:25:26.203 "trsvcid": "4420", 00:25:26.203 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:26.203 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:26.203 "prchk_reftag": false, 00:25:26.203 "prchk_guard": false, 00:25:26.203 "hdgst": false, 00:25:26.203 "ddgst": false, 00:25:26.203 "dhchap_key": "key2", 00:25:26.203 "allow_unrecognized_csi": false, 00:25:26.203 "method": "bdev_nvme_attach_controller", 00:25:26.203 "req_id": 1 00:25:26.203 } 00:25:26.203 Got JSON-RPC error response 00:25:26.203 response: 00:25:26.203 { 00:25:26.203 "code": -5, 00:25:26.203 "message": "Input/output error" 00:25:26.203 } 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
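Here the complementary case completes: host/auth.sh@117 retries the attach with --dhchap-key key2 while the target side holds key1, authentication again fails with JSON-RPC error -5, and @120 confirms the controller count stayed at zero. The DHHC-1 secrets exchanged throughout follow the NVMe in-band-auth representation, DHHC-1:<hash id>:<base64 of secret plus CRC-32>:, so the shape of a key can be sanity-checked in shell; a sketch using key2 from the trace (the regex and the CRC reading are my interpretation of that format, not something this log states):

# Illustrative shape check for a DHHC-1 secret (key2 from the trace above).
key='DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV:'
if [[ $key =~ ^DHHC-1:(00|01|02|03):([A-Za-z0-9+/]+={0,2}):$ ]]; then
  # Decoded payload = secret || CRC-32; 36 bytes here implies a 32-byte secret.
  bytes=$(printf '%s' "${BASH_REMATCH[2]}" | base64 -d | wc -c)
  echo "hash id ${BASH_REMATCH[1]}, decoded payload: ${bytes} bytes"
else
  echo "not a well-formed DHHC-1 secret" >&2
fi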
00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.203 request: 00:25:26.203 { 00:25:26.203 "name": "nvme0", 00:25:26.203 "trtype": "tcp", 00:25:26.203 "traddr": "10.0.0.1", 00:25:26.203 "adrfam": "ipv4", 00:25:26.203 "trsvcid": "4420", 00:25:26.203 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:26.203 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:26.203 "prchk_reftag": false, 00:25:26.203 "prchk_guard": false, 00:25:26.203 "hdgst": false, 00:25:26.203 "ddgst": false, 00:25:26.203 "dhchap_key": "key1", 00:25:26.203 "dhchap_ctrlr_key": "ckey2", 00:25:26.203 "allow_unrecognized_csi": false, 00:25:26.203 "method": "bdev_nvme_attach_controller", 00:25:26.203 "req_id": 1 00:25:26.203 } 00:25:26.203 Got JSON-RPC error response 00:25:26.203 response: 00:25:26.203 { 00:25:26.203 "code": -5, 00:25:26.203 "message": "Input/output 
error" 00:25:26.203 } 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:26.203 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.204 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.462 nvme0n1 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:26.462 12:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.462 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.462 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.463 request: 00:25:26.463 { 00:25:26.463 "name": "nvme0", 00:25:26.463 "dhchap_key": "key1", 00:25:26.463 "dhchap_ctrlr_key": "ckey2", 00:25:26.463 "method": "bdev_nvme_set_keys", 00:25:26.463 "req_id": 1 00:25:26.463 } 00:25:26.463 Got JSON-RPC error response 00:25:26.463 response: 00:25:26.463 { 00:25:26.463 "code": -13, 00:25:26.463 "message": "Permission denied" 00:25:26.463 } 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:26.463 12:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:27.836 12:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2JiNDY2NDEwMzdlMGJjZmViNjRhOWUyZmUzNWE5MTFiZjA2NTUzNzczNmY5N2MxKliHvA==: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mzc5NDA3ZWI5OGI3MmM0ZGU0ZWQyMDVhMWViYWMyMWFmMGRkODM5YjA2ZDZlMDBjN6PBJg==: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 nvme0n1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWJhOGFkMmRjZjljN2YyNzIxZDQwNDBjMTEwZjE2M2N3AkZV: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjJiMzI3YzAyNGUxMjM4NTZkODY5YTYxNzRmMTI0ZmaVcne5: 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 request: 00:25:28.796 { 00:25:28.796 "name": "nvme0", 00:25:28.796 "dhchap_key": "key2", 00:25:28.796 "dhchap_ctrlr_key": "ckey1", 00:25:28.796 "method": "bdev_nvme_set_keys", 00:25:28.796 "req_id": 1 00:25:28.796 } 00:25:28.796 Got JSON-RPC error response 00:25:28.796 response: 00:25:28.796 { 00:25:28.796 "code": -13, 00:25:28.796 "message": "Permission denied" 00:25:28.796 } 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.796 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.099 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:29.099 12:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:30.031 12:37:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:30.031 rmmod nvme_tcp 00:25:30.031 rmmod nvme_fabrics 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 705416 ']' 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 705416 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 705416 ']' 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 705416 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 705416 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 705416' 00:25:30.031 killing process with pid 705416 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 705416 00:25:30.031 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 705416 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:30.288 12:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:32.197 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:32.456 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:32.456 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:32.456 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:32.456 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:32.456 12:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:33.833 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:33.833 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:33.833 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:34.774 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:34.774 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pp9 /tmp/spdk.key-null.eR1 /tmp/spdk.key-sha256.XJJ /tmp/spdk.key-sha384.bgG /tmp/spdk.key-sha512.mXR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:34.774 12:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:36.151 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:36.151 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:36.151 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
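The clean_kernel_target steps logged above tear down the Linux kernel nvmet target through configfs in strict reverse order of setup. A hedged sketch of that order (subsystem and port paths are the ones from this run; the exact file behind the logged bare 'echo 0' is an assumption, since the log elides it):

    SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo 0 > "$SUBSYS/namespaces/1/enable"   # assumed target of the logged 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$SUBSYS/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$SUBSYS"
    modprobe -r nvmet_tcp nvmet              # matches the modprobe -r in the log

Order matters here: the port-to-subsystem symlink must be removed before the namespace and port directories, and the modules can only unload once configfs is empty.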
00:25:36.151 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:36.151 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:36.151 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:36.151 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:36.151 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:36.151 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:36.151 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:36.151 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:36.151 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:36.151 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:36.151 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:36.151 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:36.151 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:36.151 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:36.151 00:25:36.151 real 0m51.449s 00:25:36.151 user 0m48.462s 00:25:36.151 sys 0m6.212s 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.151 ************************************ 00:25:36.151 END TEST nvmf_auth_host 00:25:36.151 ************************************ 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.151 ************************************ 00:25:36.151 START TEST nvmf_digest 00:25:36.151 ************************************ 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:36.151 * Looking for test storage... 
00:25:36.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:25:36.151 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.411 --rc genhtml_branch_coverage=1 00:25:36.411 --rc genhtml_function_coverage=1 00:25:36.411 --rc genhtml_legend=1 00:25:36.411 --rc geninfo_all_blocks=1 00:25:36.411 --rc geninfo_unexecuted_blocks=1 00:25:36.411 00:25:36.411 ' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.411 --rc genhtml_branch_coverage=1 00:25:36.411 --rc genhtml_function_coverage=1 00:25:36.411 --rc genhtml_legend=1 00:25:36.411 --rc geninfo_all_blocks=1 00:25:36.411 --rc geninfo_unexecuted_blocks=1 00:25:36.411 00:25:36.411 ' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.411 --rc genhtml_branch_coverage=1 00:25:36.411 --rc genhtml_function_coverage=1 00:25:36.411 --rc genhtml_legend=1 00:25:36.411 --rc geninfo_all_blocks=1 00:25:36.411 --rc geninfo_unexecuted_blocks=1 00:25:36.411 00:25:36.411 ' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.411 --rc genhtml_branch_coverage=1 00:25:36.411 --rc genhtml_function_coverage=1 00:25:36.411 --rc genhtml_legend=1 00:25:36.411 --rc geninfo_all_blocks=1 00:25:36.411 --rc geninfo_unexecuted_blocks=1 00:25:36.411 00:25:36.411 ' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.411 
12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.411 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.412 12:37:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.412 12:37:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:38.309 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.309 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.309 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.309 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.309 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.309 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.310 
12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.310 Found net devices under 0000:0a:00.0: cvl_0_0 
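The discovery loop above maps each matched e810 PCI function (device ID 0x159b) to its kernel net device via sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A standalone sketch of the same lookup (PCI addresses are the two found in this run):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # Each PCI network function exposes its netdev name under .../net/
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done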
00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.310 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.568 12:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:38.568 00:25:38.568 --- 10.0.0.2 ping statistics --- 00:25:38.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.568 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:25:38.568 00:25:38.568 --- 10.0.0.1 ping statistics --- 00:25:38.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.568 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:38.568 ************************************ 00:25:38.568 START TEST nvmf_digest_clean 00:25:38.568 ************************************ 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=715023 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 715023 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 715023 ']' 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.568 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.569 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.569 [2024-10-30 12:37:11.108454] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:25:38.569 [2024-10-30 12:37:11.108554] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.569 [2024-10-30 12:37:11.180781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.569 [2024-10-30 12:37:11.234040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.569 [2024-10-30 12:37:11.234116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.569 [2024-10-30 12:37:11.234139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.569 [2024-10-30 12:37:11.234150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.569 [2024-10-30 12:37:11.234159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
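nvmfappstart, as exercised above, launches nvmf_tgt inside the target namespace with --wait-for-rpc and then blocks until the RPC socket appears (waitforlisten). A rough sketch of that startup handshake (binary path, netns name, and socket are from this run; the polling loop is a simplification of waitforlisten, not its actual implementation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Wait for the UNIX RPC socket; bail out if the target dies first.
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done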
00:25:38.569 [2024-10-30 12:37:11.234783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.827 null0 00:25:38.827 [2024-10-30 12:37:11.465557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.827 [2024-10-30 12:37:11.489798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=715049 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 715049 /var/tmp/bperf.sock 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 715049 ']' 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:38.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.827 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:39.086 [2024-10-30 12:37:11.538154] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:25:39.086 [2024-10-30 12:37:11.538215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715049 ] 00:25:39.086 [2024-10-30 12:37:11.601802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.086 [2024-10-30 12:37:11.658149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.344 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:39.344 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:39.344 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:39.344 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:39.344 12:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:39.602 12:37:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.602 12:37:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.167 nvme0n1 00:25:40.167 12:37:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:40.167 12:37:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:40.167 Running I/O for 2 seconds... 
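Every benchmark pass in this test follows the same client-side recipe, visible in the trace above: launch bdevperf suspended at --wait-for-rpc (the pause exists so a DSA accel module could be configured first when scan_dsa is true; here it is false), finish framework init, attach the remote namespace with the data digest turned on, and kick off the timed run. A sketch using the paths and parameters from the log:

SOCK=/var/tmp/bperf.sock
./build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!
./scripts/rpc.py -s "$SOCK" framework_start_init
# --ddgst enables the NVMe/TCP data digest: a CRC32C accompanies every data
# PDU, which is what pushes I/O through the accel framework's crc32c opcode.
./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests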
00:25:42.471 18477.00 IOPS, 72.18 MiB/s [2024-10-30T11:37:15.153Z] 18829.50 IOPS, 73.55 MiB/s
00:25:42.472 Latency(us)
00:25:42.472 [2024-10-30T11:37:15.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:42.472 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:42.472 nvme0n1 : 2.01 18840.88 73.60 0.00 0.00 6783.93 3470.98 17670.45
00:25:42.472 [2024-10-30T11:37:15.153Z] ===================================================================================================================
00:25:42.472 [2024-10-30T11:37:15.153Z] Total : 18840.88 73.60 0.00 0.00 6783.93 3470.98 17670.45
00:25:42.472 {
00:25:42.472 "results": [
00:25:42.472 {
00:25:42.472 "job": "nvme0n1",
00:25:42.472 "core_mask": "0x2",
00:25:42.472 "workload": "randread",
00:25:42.472 "status": "finished",
00:25:42.472 "queue_depth": 128,
00:25:42.472 "io_size": 4096,
00:25:42.472 "runtime": 2.006116,
00:25:42.472 "iops": 18840.884574969743,
00:25:42.472 "mibps": 73.59720537097556,
00:25:42.472 "io_failed": 0,
00:25:42.472 "io_timeout": 0,
00:25:42.472 "avg_latency_us": 6783.9343618688135,
00:25:42.472 "min_latency_us": 3470.9807407407407,
00:25:42.472 "max_latency_us": 17670.447407407406
00:25:42.472 }
00:25:42.472 ],
00:25:42.472 "core_count": 1
00:25:42.472 }
00:25:42.472 12:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:42.472 12:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:42.472 12:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:42.472 12:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:42.472 | select(.opcode=="crc32c")
00:25:42.472 | "\(.module_name) \(.executed)"'
00:25:42.472 12:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 715049
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 715049 ']'
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 715049
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:42.472 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 715049
00:25:42.729 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:42.729 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '['
reactor_1 = sudo ']' 00:25:42.729 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 715049' 00:25:42.729 killing process with pid 715049 00:25:42.729 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 715049 00:25:42.729 Received shutdown signal, test time was about 2.000000 seconds 00:25:42.729 00:25:42.729 Latency(us) 00:25:42.729 [2024-10-30T11:37:15.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.729 [2024-10-30T11:37:15.411Z] =================================================================================================================== 00:25:42.730 [2024-10-30T11:37:15.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 715049 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=715568 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 715568 /var/tmp/bperf.sock 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 715568 ']' 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:42.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.730 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.988 [2024-10-30 12:37:15.428933] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
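Before each bdevperf worker is killed, the script asserts that digests were really computed and by the expected engine: with dsa_initiator=false the crc32c opcode must land in the software module and its executed counter must be non-zero. That is what the read/get_accel_stats/jq sequence above checks after every run; a minimal reconstruction:

read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))          # the digest path was actually exercised
[[ $acc_module == software ]]   # and it ran in the expected module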
00:25:42.988 [2024-10-30 12:37:15.429027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715568 ] 00:25:42.988 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:42.988 Zero copy mechanism will not be used. 00:25:42.988 [2024-10-30 12:37:15.496411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.988 [2024-10-30 12:37:15.552452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.988 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:42.988 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:42.988 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:42.988 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:42.988 12:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.552 12:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.552 12:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.810 nvme0n1 00:25:43.810 12:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:43.810 12:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:44.067 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:44.067 Zero copy mechanism will not be used. 00:25:44.067 Running I/O for 2 seconds... 
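Two things change in this second pass: the I/O size grows to 128 KiB at queue depth 16, and bdevperf notes that 131072 bytes exceeds the sock layer's 65536-byte zero-copy threshold, so buffers are copied instead of being sent zero-copy. The MiB/s column in the results below is simply IOPS scaled by the I/O size; a quick check against the reported total:

# MiB/s = IOPS * io_size / 2^20; for the qd=16, 128 KiB randread total below:
awk 'BEGIN { printf "%.2f MiB/s\n", 5281.14 * 131072 / 1048576 }'   # -> 660.14 MiB/s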
00:25:45.932 5330.00 IOPS, 666.25 MiB/s [2024-10-30T11:37:18.613Z] 5279.00 IOPS, 659.88 MiB/s
00:25:45.932 Latency(us)
00:25:45.932 [2024-10-30T11:37:18.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:45.933 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:45.933 nvme0n1 : 2.00 5281.14 660.14 0.00 0.00 3025.35 843.47 5218.61
00:25:45.933 [2024-10-30T11:37:18.614Z] ===================================================================================================================
00:25:45.933 [2024-10-30T11:37:18.614Z] Total : 5281.14 660.14 0.00 0.00 3025.35 843.47 5218.61
00:25:45.933 {
00:25:45.933 "results": [
00:25:45.933 {
00:25:45.933 "job": "nvme0n1",
00:25:45.933 "core_mask": "0x2",
00:25:45.933 "workload": "randread",
00:25:45.933 "status": "finished",
00:25:45.933 "queue_depth": 16,
00:25:45.933 "io_size": 131072,
00:25:45.933 "runtime": 2.002221,
00:25:45.933 "iops": 5281.1352992501825,
00:25:45.933 "mibps": 660.1419124062728,
00:25:45.933 "io_failed": 0,
00:25:45.933 "io_timeout": 0,
00:25:45.933 "avg_latency_us": 3025.3514697826254,
00:25:45.933 "min_latency_us": 843.4725925925926,
00:25:45.933 "max_latency_us": 5218.607407407408
00:25:45.933 }
00:25:45.933 ],
00:25:45.933 "core_count": 1
00:25:45.933 }
00:25:45.933 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:45.933 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:45.933 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:45.933 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:45.933 | select(.opcode=="crc32c")
00:25:45.933 | "\(.module_name) \(.executed)"'
00:25:45.933 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 715568
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 715568 ']'
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 715568
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 715568
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '['
reactor_1 = sudo ']' 00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 715568' 00:25:46.499 killing process with pid 715568 00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 715568 00:25:46.499 Received shutdown signal, test time was about 2.000000 seconds 00:25:46.499 00:25:46.499 Latency(us) 00:25:46.499 [2024-10-30T11:37:19.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.499 [2024-10-30T11:37:19.180Z] =================================================================================================================== 00:25:46.499 [2024-10-30T11:37:19.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.499 12:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 715568 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=715983 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 715983 /var/tmp/bperf.sock 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 715983 ']' 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.499 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.757 [2024-10-30 12:37:19.198719] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
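The teardown between passes is autotest_common.sh's killprocess, whose xtrace appears above: guard against an empty pid, confirm the process is still alive, check its comm name (bdevperf's reactor thread shows up as reactor_1, not sudo), then kill and reap it. A rough reconstruction of the logic this trace exercises; the real helper has additional branches that this run never takes:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
}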
00:25:46.757 [2024-10-30 12:37:19.198815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715983 ] 00:25:46.757 [2024-10-30 12:37:19.264336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.757 [2024-10-30 12:37:19.318971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.757 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:46.757 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:46.757 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:46.757 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:46.758 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:47.323 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.323 12:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.580 nvme0n1 00:25:47.580 12:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:47.580 12:37:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.838 Running I/O for 2 seconds... 
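From here the write side mirrors the read side: run_bperf is invoked once per (workload, I/O size, queue depth) tuple, so the whole clean-digest test is four passes over the same launch/attach/run/verify/kill cycle. A hypothetical condensed driver for the same matrix; the actual script spells out the four run_bperf calls rather than looping:

for spec in 'randread 4096 128' 'randread 131072 16' \
            'randwrite 4096 128' 'randwrite 131072 16'; do
    read -r rw bs qd <<< "$spec"
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    pid=$!
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    kill "$pid" && wait "$pid"
done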
00:25:49.701 21088.00 IOPS, 82.38 MiB/s [2024-10-30T11:37:22.382Z] 20564.00 IOPS, 80.33 MiB/s
00:25:49.701 Latency(us)
00:25:49.701 [2024-10-30T11:37:22.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.701 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:49.701 nvme0n1 : 2.01 20562.59 80.32 0.00 0.00 6210.93 2706.39 9223.59
00:25:49.701 [2024-10-30T11:37:22.382Z] ===================================================================================================================
00:25:49.701 [2024-10-30T11:37:22.382Z] Total : 20562.59 80.32 0.00 0.00 6210.93 2706.39 9223.59
00:25:49.701 {
00:25:49.701 "results": [
00:25:49.701 {
00:25:49.701 "job": "nvme0n1",
00:25:49.701 "core_mask": "0x2",
00:25:49.701 "workload": "randwrite",
00:25:49.701 "status": "finished",
00:25:49.701 "queue_depth": 128,
00:25:49.701 "io_size": 4096,
00:25:49.701 "runtime": 2.007918,
00:25:49.701 "iops": 20562.592695518444,
00:25:49.701 "mibps": 80.32262771686892,
00:25:49.701 "io_failed": 0,
00:25:49.701 "io_timeout": 0,
00:25:49.701 "avg_latency_us": 6210.9261420769735,
00:25:49.701 "min_latency_us": 2706.394074074074,
00:25:49.701 "max_latency_us": 9223.585185185186
00:25:49.701 }
00:25:49.701 ],
00:25:49.701 "core_count": 1
00:25:49.701 }
00:25:49.957 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:49.957 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:49.957 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:49.957 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:49.957 | select(.opcode=="crc32c")
00:25:49.957 | "\(.module_name) \(.executed)"'
00:25:49.957 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 715983
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 715983 ']'
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 715983
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 715983
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '['
reactor_1 = sudo ']' 00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 715983' 00:25:50.215 killing process with pid 715983 00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 715983 00:25:50.215 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.215 00:25:50.215 Latency(us) 00:25:50.215 [2024-10-30T11:37:22.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.215 [2024-10-30T11:37:22.896Z] =================================================================================================================== 00:25:50.215 [2024-10-30T11:37:22.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.215 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 715983 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=716395 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 716395 /var/tmp/bperf.sock 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 716395 ']' 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:50.471 12:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.471 [2024-10-30 12:37:22.975741] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
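Each waitforlisten above is the synchronization point between forking an SPDK app and talking RPC to it: poll until the UNIX-domain socket answers, bailing out early if the process dies first. A hypothetical reconstruction of the idea; the traced variables show the real helper in autotest_common.sh uses max_retries=100, though its exact retry cadence may differ:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/bperf.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died before listening
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}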
00:25:50.471 [2024-10-30 12:37:22.975834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716395 ] 00:25:50.471 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:50.471 Zero copy mechanism will not be used. 00:25:50.471 [2024-10-30 12:37:23.041395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.471 [2024-10-30 12:37:23.095374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.728 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:50.728 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:50.728 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:50.728 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:50.728 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:50.984 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:50.984 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.546 nvme0n1 00:25:51.546 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:51.546 12:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:51.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:51.546 Zero copy mechanism will not be used. 00:25:51.546 Running I/O for 2 seconds... 
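Across all four passes the latency column is consistent with the queueing parameters: at a saturated queue depth, average latency is roughly qdepth / IOPS (Little's law). Checking that against the qd=16, 128 KiB randwrite totals reported below:

# avg latency ~= qdepth / IOPS; the table below reports 2483.58 us, and the
# small gap comes from startup/teardown inside the 2.005 s measured runtime:
awk 'BEGIN { printf "%.0f us\n", 16 / 6415.81 * 1e6 }'   # -> ~2494 us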
00:25:53.408 6585.00 IOPS, 823.12 MiB/s [2024-10-30T11:37:26.089Z] 6421.50 IOPS, 802.69 MiB/s
00:25:53.408 Latency(us)
00:25:53.408 [2024-10-30T11:37:26.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:53.408 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:53.408 nvme0n1 : 2.00 6415.81 801.98 0.00 0.00 2483.58 1699.08 6747.78
00:25:53.408 [2024-10-30T11:37:26.089Z] ===================================================================================================================
00:25:53.408 [2024-10-30T11:37:26.089Z] Total : 6415.81 801.98 0.00 0.00 2483.58 1699.08 6747.78
00:25:53.408 {
00:25:53.408 "results": [
00:25:53.408 {
00:25:53.408 "job": "nvme0n1",
00:25:53.408 "core_mask": "0x2",
00:25:53.408 "workload": "randwrite",
00:25:53.408 "status": "finished",
00:25:53.408 "queue_depth": 16,
00:25:53.408 "io_size": 131072,
00:25:53.408 "runtime": 2.004891,
00:25:53.409 "iops": 6415.81013631165,
00:25:53.409 "mibps": 801.9762670389563,
00:25:53.409 "io_failed": 0,
00:25:53.409 "io_timeout": 0,
00:25:53.409 "avg_latency_us": 2483.58222245257,
00:25:53.409 "min_latency_us": 1699.0814814814814,
00:25:53.409 "max_latency_us": 6747.780740740741
00:25:53.409 }
00:25:53.409 ],
00:25:53.409 "core_count": 1
00:25:53.409 }
00:25:53.409 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:53.409 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:53.409 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:53.409 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:53.409 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:53.409 | select(.opcode=="crc32c")
00:25:53.409 | "\(.module_name) \(.executed)"'
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 716395
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 716395 ']'
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 716395
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:53.665 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 716395
00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '['
reactor_1 = sudo ']' 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 716395' 00:25:53.922 killing process with pid 716395 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 716395 00:25:53.922 Received shutdown signal, test time was about 2.000000 seconds 00:25:53.922 00:25:53.922 Latency(us) 00:25:53.922 [2024-10-30T11:37:26.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.922 [2024-10-30T11:37:26.603Z] =================================================================================================================== 00:25:53.922 [2024-10-30T11:37:26.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 716395 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 715023 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 715023 ']' 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 715023 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.922 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 715023 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 715023' 00:25:54.181 killing process with pid 715023 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 715023 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 715023 00:25:54.181 00:25:54.181 real 0m15.774s 00:25:54.181 user 0m30.392s 00:25:54.181 sys 0m4.745s 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:54.181 ************************************ 00:25:54.181 END TEST nvmf_digest_clean 00:25:54.181 ************************************ 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:54.181 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:54.439 ************************************ 00:25:54.439 START TEST nvmf_digest_error 00:25:54.439 ************************************ 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=716946 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 716946 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 716946 ']' 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.439 12:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.439 [2024-10-30 12:37:26.947449] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:25:54.439 [2024-10-30 12:37:26.947530] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.439 [2024-10-30 12:37:27.018575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.439 [2024-10-30 12:37:27.075038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.439 [2024-10-30 12:37:27.075093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.439 [2024-10-30 12:37:27.075107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.439 [2024-10-30 12:37:27.075117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.439 [2024-10-30 12:37:27.075127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
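Every test body in this log is wrapped by run_test, which prints the START TEST / END TEST banners seen above along with the real/user/sys timing line for the finished test. A hypothetical reconstruction of that wrapper; the real function in autotest_common.sh also validates its arguments and handles xtrace bookkeeping:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"       # bash's time keyword emits the real/user/sys line above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}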
00:25:54.439 [2024-10-30 12:37:27.075750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 [2024-10-30 12:37:27.208476] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.698 null0 00:25:54.698 [2024-10-30 12:37:27.324971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.698 [2024-10-30 12:37:27.349210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=716971 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 716971 /var/tmp/bperf.sock 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 716971 ']' 
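The error variant differs from the clean test chiefly on the target side: before framework init, the crc32c opcode is reassigned from the software module to the accel 'error' module, which behaves normally until told to corrupt a given number of operations. The client then attaches with --ddgst plus unlimited bdev retries (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, visible below), so each corrupted digest surfaces as a retryable transport error instead of failing the run. A sketch of the target-side RPCs, using only commands visible in this trace; common_target_config's transport and listener setup is wrapped in a helper and left elided:

# rpc.py talks to the target's default socket, /var/tmp/spdk.sock:
./scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c via the error module
# ... common_target_config: framework init, TCP transport, null0 namespace,
#     listener on 10.0.0.2:4420 (helper-wrapped, not shown verbatim here) ...
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # clean slate for the attach
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt 256 digests
# The host detects the mismatches as 'data digest error' and completes the
# affected READs with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as the
# qpair traces that follow show.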
00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.698 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.956 [2024-10-30 12:37:27.397628] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:25:54.956 [2024-10-30 12:37:27.397693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716971 ] 00:25:54.956 [2024-10-30 12:37:27.462581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.956 [2024-10-30 12:37:27.522100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.213 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.213 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:55.213 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.213 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.472 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:55.472 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.472 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.472 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.472 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.472 12:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.770 nvme0n1 00:25:55.770 12:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:55.770 12:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.770 12:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.770 
12:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.770 12:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:55.770 12:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.051 Running I/O for 2 seconds... 00:25:56.051 [2024-10-30 12:37:28.475914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.475964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.475983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.492288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.492320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.492336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.502616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.502648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.502664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.517668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.517698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.517713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.533970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.534000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.534016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.547400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.547430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.547447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.560156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.560185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.560214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.577207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.577254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.577282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.587298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.587329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.587345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.602948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.602977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.602993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.618620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.618651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.618667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.631681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.631712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.631729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.645987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.646018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.646034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.662177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.662207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.662223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.677966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.051 [2024-10-30 12:37:28.677997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.051 [2024-10-30 12:37:28.678013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.051 [2024-10-30 12:37:28.692905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.052 [2024-10-30 12:37:28.692940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.052 [2024-10-30 12:37:28.692957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.052 [2024-10-30 12:37:28.703499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.052 [2024-10-30 12:37:28.703529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.052 [2024-10-30 12:37:28.703561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.052 [2024-10-30 12:37:28.719278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.052 [2024-10-30 12:37:28.719310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.052 [2024-10-30 12:37:28.719327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.052 [2024-10-30 12:37:28.733213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.052 [2024-10-30 12:37:28.733266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.052 [2024-10-30 12:37:28.733286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.746490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.746539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.746558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.759868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.759898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.759913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.773200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.773229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.773268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.786710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.786741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.786758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.798136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.798181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.798203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.810845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.810889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.810905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.823555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.823611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.823627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.836186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.836232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.311 [2024-10-30 12:37:28.836248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.849428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.849457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.849473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.863880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.863909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.863925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.878818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.878850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.878867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.894208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.894239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.894263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.910089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.910121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.910138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.923004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.923040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.923057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.935616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.935662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:21531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.935678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.949807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.949838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.949854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.963870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.963902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.963918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.976766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.976797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.976813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.311 [2024-10-30 12:37:28.990890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.311 [2024-10-30 12:37:28.990923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.311 [2024-10-30 12:37:28.990941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.003732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.003764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.003781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.020560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.020590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.020606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.034786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.034816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.034832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.050276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.050309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.050326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.065905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.065936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.065953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.083001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.083030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.083045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.093223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.093273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.093290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.107895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.107924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.107940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.124810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.571 [2024-10-30 12:37:29.124839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.124854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.571 [2024-10-30 12:37:29.139767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 
00:25:56.571 [2024-10-30 12:37:29.139798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.571 [2024-10-30 12:37:29.139815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.151770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.151799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.151815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.166838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.166867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.166887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.182745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.182777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.182794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.197997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.198026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.198042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.212526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.212571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.212593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.226121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.226167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.226184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.240186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.240231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.240249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.572 [2024-10-30 12:37:29.251971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.572 [2024-10-30 12:37:29.252003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.572 [2024-10-30 12:37:29.252022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.268729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.268760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.268776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.283900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.283931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.283947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.297193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.297228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.297271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.312292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.312323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.312340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.322853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.322881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.322896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.338432] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.338461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.338477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.351466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.351495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.351510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.361796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.361826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.361842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.376946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.376976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.376992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.391667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.391696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.391712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.402728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.402756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.830 [2024-10-30 12:37:29.402771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.830 [2024-10-30 12:37:29.415955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.830 [2024-10-30 12:37:29.415985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.416001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:56.831 [2024-10-30 12:37:29.431381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.831 [2024-10-30 12:37:29.431414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.431431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.831 [2024-10-30 12:37:29.446392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.831 [2024-10-30 12:37:29.446421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.446437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.831 18002.00 IOPS, 70.32 MiB/s [2024-10-30T11:37:29.512Z] [2024-10-30 12:37:29.458664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.831 [2024-10-30 12:37:29.458692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.458707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.831 [2024-10-30 12:37:29.472898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.831 [2024-10-30 12:37:29.472926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.472940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.831 [2024-10-30 12:37:29.487264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.831 [2024-10-30 12:37:29.487293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.487308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.831 [2024-10-30 12:37:29.502984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:56.831 [2024-10-30 12:37:29.503013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.831 [2024-10-30 12:37:29.503028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.519371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.519402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.519418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.533802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.533840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.533858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.549440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.549471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.549487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.563443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.563475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.563492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.575412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.575441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.575457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.587485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.587515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.587531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.600919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.600948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.600965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.613869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.613897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.091 [2024-10-30 12:37:29.613913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.629544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.091 [2024-10-30 12:37:29.629574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.091 [2024-10-30 12:37:29.629605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.091 [2024-10-30 12:37:29.640362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.640394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.640412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.655022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.655051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.655065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.671076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.671104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.671119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.685056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.685086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.685103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.698056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.698084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.698100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.711538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.711582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:9992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.711600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.723879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.723908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.723925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.735490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.735520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.735536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.748536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.748580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.748596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.092 [2024-10-30 12:37:29.763374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.092 [2024-10-30 12:37:29.763419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.092 [2024-10-30 12:37:29.763441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.778434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.778468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.778486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.789341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.789372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.789388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.805507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.805537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.805553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.821143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.821173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.821189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.831994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.832024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.832041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.845999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.846027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.846043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.859842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.859870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.859886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.872628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.872657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.872674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.888062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.888098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.888116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.900610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 
00:25:57.352 [2024-10-30 12:37:29.900640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.900656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.913352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.913381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.913397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.927340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.927369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.927385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.941219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.941269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.941286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.954448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.954493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.954510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.970790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.970819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.970835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.983892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.983922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.983938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:29.995448] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:29.995480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:29.995503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:30.010523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:30.010585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:30.010615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.352 [2024-10-30 12:37:30.025133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.352 [2024-10-30 12:37:30.025177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.352 [2024-10-30 12:37:30.025195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.610 [2024-10-30 12:37:30.041267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.610 [2024-10-30 12:37:30.041328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.610 [2024-10-30 12:37:30.041348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.611 [2024-10-30 12:37:30.053679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.611 [2024-10-30 12:37:30.053714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.611 [2024-10-30 12:37:30.053732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.611 [2024-10-30 12:37:30.069192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.611 [2024-10-30 12:37:30.069224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.611 [2024-10-30 12:37:30.069263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.611 [2024-10-30 12:37:30.084853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90) 00:25:57.611 [2024-10-30 12:37:30.084884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.611 [2024-10-30 12:37:30.084907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:25:57.611 [2024-10-30 12:37:30.100129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2121a90)
00:25:57.611 [2024-10-30 12:37:30.100160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.611 [2024-10-30 12:37:30.100177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
(the same data digest error / READ print / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion triple repeats 26 more times on tqpair 0x2121a90, from 12:37:30.114541 through 12:37:30.463861 -- qid:1, len:1, cid and lba varying, sqhd:0001)
00:25:57.869 18184.50 IOPS, 71.03 MiB/s [2024-10-30T11:37:30.550Z]
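(For scale: at a 4096-byte I/O size the two throughput columns in the summary below are redundant -- 17827.15 IOPS x 4096 / 1048576 ~= 69.64 MiB/s, which is exactly the mibps value reported in the results JSON.)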
00:25:57.869
00:25:57.869 Latency(us)
00:25:57.869 [2024-10-30T11:37:30.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.869 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:57.869 nvme0n1 : 2.05 17827.15 69.64 0.00 0.00 7031.85 3373.89 47574.28
00:25:57.869 [2024-10-30T11:37:30.550Z] ===================================================================================================================
00:25:57.869 [2024-10-30T11:37:30.550Z] Total : 17827.15 69.64 0.00 0.00 7031.85 3373.89 47574.28
00:25:57.869 {
00:25:57.869 "results": [
00:25:57.869 {
00:25:57.869 "job": "nvme0n1",
00:25:57.869 "core_mask": "0x2",
00:25:57.869 "workload": "randread",
00:25:57.869 "status": "finished",
00:25:57.869 "queue_depth": 128,
00:25:57.869 "io_size": 4096,
00:25:57.869 "runtime": 2.047271,
00:25:57.869 "iops": 17827.14647938646,
00:25:57.869 "mibps": 69.63729093510337,
00:25:57.869 "io_failed": 0,
00:25:57.869 "io_timeout": 0,
00:25:57.869 "avg_latency_us": 7031.852275143873,
00:25:57.869 "min_latency_us": 3373.8903703703704,
00:25:57.869 "max_latency_us": 47574.281481481485
00:25:57.869 }
00:25:57.869 ],
00:25:57.869 "core_count": 1
00:25:57.869 }
00:25:57.869 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:57.869 | .driver_specific
00:25:57.869 | .nvme_error
00:25:57.869 | .status_code
00:25:57.869 | .command_transient_transport_error'
00:25:57.869 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:58.127 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
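The 143 read back here is the per-bdev error counter kept when the bdev_nvme layer is configured with --nvme-error-stat (as traced later in this log); as a minimal standalone sketch, using the same socket path, bdev name, and jq filter as the trace above, the whole readout collapses into one pipeline:

    # print the COMMAND TRANSIENT TRANSPORT ERROR count recorded for nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

A non-zero result is all host/digest.sh@71 asserts: with digest corruption injected, affected READs must surface as transient transport errors that get retried, not as hard failures (io_failed stayed 0 in the results JSON above).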
00:25:58.127 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 716971
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 716971 ']'
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 716971
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:25:58.127 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 716971
00:25:58.385 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 716971'
killing process with pid 716971
12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 716971
Received shutdown signal, test time was about 2.000000 seconds
00:25:58.385
00:25:58.385 Latency(us)
[2024-10-30T11:37:31.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-30T11:37:31.066Z] ===================================================================================================================
00:25:58.385 [2024-10-30T11:37:31.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.385 12:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 716971
00:25:58.385 12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=717385
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 717385 /var/tmp/bperf.sock
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 717385 ']'
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:58.644 [2024-10-30 12:37:31.107478] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
[2024-10-30 12:37:31.107578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717385 ]
00:25:58.644 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:58.644 Zero copy mechanism will not be used.
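For context, bperf here is plain bdevperf parked on a private RPC socket; -z starts it idle so the workload only runs once perform_tests is issued. A minimal launch-and-wait sketch using this run's paths and parameters -- the polling loop is an assumption standing in for the real waitforlisten helper in autotest_common.sh, which also verifies the pid stays alive:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle (-z) on its own UNIX-domain RPC socket
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # wait until the socket accepts RPCs before configuring anything against it
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done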
00:25:58.644 [2024-10-30 12:37:31.176025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:58.644 [2024-10-30 12:37:31.232984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:58.902 12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:59.159 12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.159 12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.417 nvme0n1
00:25:59.417 12:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
12:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
12:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:59.675 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:59.675 Zero copy mechanism will not be used.
00:25:59.675 Running I/O for 2 seconds...
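Condensed, the error pass traced above is four RPCs plus the test trigger. A sketch with the same arguments, with one assumption flagged inline: rpc_cmd in digest.sh goes to the target application's RPC socket (taken here to be the default), while bperf_rpc goes to bperf's /var/tmp/bperf.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bperf side: keep NVMe error counters and retry transport errors indefinitely
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side (default RPC socket assumed): clear any stale crc32c injection
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    # bperf side: attach with --ddgst so data PDUs carry, and are checked against, a CRC32C data digest
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt crc32c results (-t corrupt -i 32, exactly as traced above)
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    # kick off the 2-second qd=16, 128KiB randread run configured on the bdevperf command line
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then shows up host-side as the nvme_tcp data digest errors below, completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and retried.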
00:25:59.675 [2024-10-30 12:37:32.192328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120)
00:25:59.675 [2024-10-30 12:37:32.192390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.675 [2024-10-30 12:37:32.192410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
(the same data digest error / READ print / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion triple repeats continuously on tqpair 0x172c120 from 12:37:32.199128 through 12:37:32.899036 -- qid:1, len:32, cid and lba varying, sqhd cycling 0001/0021/0041/0061)
00:26:00.455 [2024-10-30 12:37:32.906584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120)
00:26:00.455 [2024-10-30 12:37:32.906615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.455 [2024-10-30 12:37:32.906631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.914724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.914754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.914771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.922400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.922431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.922448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.930027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.930058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.930074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.937422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.937455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.937473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.945054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.945086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.945119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.952758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.952804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.952820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.959944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.959976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16128 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.960014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.968335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.968366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.968398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.976574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.976605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.976621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.984433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.984481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.984499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:32.992725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:32.992756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:32.992773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.000498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.000530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.000562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.008404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.008435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.008452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.015793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.015824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.015841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.023305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.023339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.023357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.030908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.030944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.030961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.038518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.038566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.038584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.046145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.046190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.046206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.053993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.054023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.054039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.061448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.061510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.061559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.069052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.069083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.069100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.455 [2024-10-30 12:37:33.076691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.455 [2024-10-30 12:37:33.076722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.455 [2024-10-30 12:37:33.076777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.081859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.081933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.081954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.088535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.088581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.088597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.096500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.096532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.096563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.103946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.103975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.103992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.110903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.110937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.110954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.118613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 
[2024-10-30 12:37:33.118646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.118664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.126220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.126275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.126294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.456 [2024-10-30 12:37:33.133945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.456 [2024-10-30 12:37:33.133983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.456 [2024-10-30 12:37:33.134004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.715 [2024-10-30 12:37:33.141815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.715 [2024-10-30 12:37:33.141849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.715 [2024-10-30 12:37:33.141866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.715 [2024-10-30 12:37:33.148633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.715 [2024-10-30 12:37:33.148664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.715 [2024-10-30 12:37:33.148696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.715 [2024-10-30 12:37:33.156058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.715 [2024-10-30 12:37:33.156090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.715 [2024-10-30 12:37:33.156116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.715 [2024-10-30 12:37:33.163704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:00.715 [2024-10-30 12:37:33.163735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.715 [2024-10-30 12:37:33.163752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.715 [2024-10-30 12:37:33.171064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
00:26:00.715 4129.00 IOPS, 516.12 MiB/s [2024-10-30T11:37:33.396Z]
[... data digest error records continue in the same pattern; duplicates omitted ...]
dnr:0 00:26:01.236 [2024-10-30 12:37:33.766714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.766744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.766761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.774113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.774144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.774167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.781703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.781749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.781766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.789375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.789409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.789428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.796068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.796101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.796119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.803716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.803747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.803764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.811754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.811787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.811804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.819518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.819566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.819584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.826380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.826413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.826432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.833053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.833084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.833101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.840010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.840048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.840066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.847227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.847265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.847300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.854702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.854748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.854766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.236 [2024-10-30 12:37:33.861526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.236 [2024-10-30 12:37:33.861560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.236 [2024-10-30 12:37:33.861579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.868337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.868371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.868406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.875193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.875226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.875244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.881961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.882009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.882027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.888868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.888917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.888934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.895454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.895488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.895506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.902754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.902801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.902819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.910132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.910164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 
[2024-10-30 12:37:33.910182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.237 [2024-10-30 12:37:33.917913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.237 [2024-10-30 12:37:33.917946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.237 [2024-10-30 12:37:33.917981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.925294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.925327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.925345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.932652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.932685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.932702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.940131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.940163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.940181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.947759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.947791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.947809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.955469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.955513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.955532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.963081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.963113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.963136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.970865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.970898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.970915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.978319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.978353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.978371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.986315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.986348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.986367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:33.994584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:33.994616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:33.994633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.002334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.002367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.002401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.010552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.010584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.010601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.018963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.019011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.027190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.027221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.027253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.034943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.034976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.034994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.042788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.042902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.042923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.051326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.051359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.051377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.059229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.059288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.059321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.066940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.066987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.067005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.074471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.496 [2024-10-30 12:37:34.074505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.496 [2024-10-30 12:37:34.074523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.496 [2024-10-30 12:37:34.081666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.081699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.081716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.089206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.089254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.089279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.096893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.096927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.096956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.104733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.104765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.104782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.112005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.112038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.112056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.118659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.118690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.118707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.126195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 
00:26:01.497 [2024-10-30 12:37:34.126242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.126267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.133886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.133917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.133934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.141941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.141975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.141993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.149204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.149251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.149279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.156822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.156855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.156872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.164478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.164517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.164535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.171656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.171689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.497 [2024-10-30 12:37:34.171706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.497 [2024-10-30 12:37:34.178983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.497 [2024-10-30 12:37:34.179043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.755 [2024-10-30 12:37:34.179072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.755 4198.50 IOPS, 524.81 MiB/s [2024-10-30T11:37:34.436Z] [2024-10-30 12:37:34.188598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x172c120) 00:26:01.755 [2024-10-30 12:37:34.188647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.755 [2024-10-30 12:37:34.188666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.755 00:26:01.755 Latency(us) 00:26:01.755 [2024-10-30T11:37:34.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:01.755 nvme0n1 : 2.01 4195.33 524.42 0.00 0.00 3808.59 983.04 9417.77 00:26:01.755 [2024-10-30T11:37:34.436Z] =================================================================================================================== 00:26:01.755 [2024-10-30T11:37:34.436Z] Total : 4195.33 524.42 0.00 0.00 3808.59 983.04 9417.77 00:26:01.755 { 00:26:01.755 "results": [ 00:26:01.755 { 00:26:01.755 "job": "nvme0n1", 00:26:01.755 "core_mask": "0x2", 00:26:01.755 "workload": "randread", 00:26:01.755 "status": "finished", 00:26:01.755 "queue_depth": 16, 00:26:01.755 "io_size": 131072, 00:26:01.755 "runtime": 2.005327, 00:26:01.755 "iops": 4195.325749865234, 00:26:01.755 "mibps": 524.4157187331542, 00:26:01.755 "io_failed": 0, 00:26:01.755 "io_timeout": 0, 00:26:01.755 "avg_latency_us": 3808.587749294522, 00:26:01.755 "min_latency_us": 983.04, 00:26:01.755 "max_latency_us": 9417.765925925925 00:26:01.755 } 00:26:01.755 ], 00:26:01.755 "core_count": 1 00:26:01.755 } 00:26:01.755 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:01.755 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:01.755 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:01.755 | .driver_specific 00:26:01.755 | .nvme_error 00:26:01.755 | .status_code 00:26:01.755 | .command_transient_transport_error' 00:26:01.755 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 271 > 0 )) 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 717385 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 717385 ']' 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 717385 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:02.013 12:37:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 717385 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 717385' 00:26:02.013 killing process with pid 717385 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 717385 00:26:02.013 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.013 00:26:02.013 Latency(us) 00:26:02.013 [2024-10-30T11:37:34.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.013 [2024-10-30T11:37:34.694Z] =================================================================================================================== 00:26:02.013 [2024-10-30T11:37:34.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.013 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 717385 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=717910 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 717910 /var/tmp/bperf.sock 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 717910 ']' 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.270 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:02.271 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:02.271 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:02.271 12:37:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.271 [2024-10-30 12:37:34.781350] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
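The get_transient_errcount step traced above (host/digest.sh@27/@28/@71) reduces to a short shell sketch: query the bdev's NVMe error counters over the bperf RPC socket and check that the injected digest errors were surfaced to the host as transient transport errors. The socket, bdev name, and jq filter are the ones visible in this run; the shortened rpc.py path is an adaptation, not the harness itself.
  count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))  # host/digest.sh@71 above: the test passes only if this count is non-zero (271 in this run)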
00:26:02.271 [2024-10-30 12:37:34.781448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717910 ] 00:26:02.271 [2024-10-30 12:37:34.847583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.271 [2024-10-30 12:37:34.901888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.528 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.528 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:02.528 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.528 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.787 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:02.787 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.787 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.787 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.787 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.787 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.045 nvme0n1 00:26:03.045 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:03.045 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.045 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.045 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.045 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:03.045 12:37:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.304 Running I/O for 2 seconds... 
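Condensed, the randwrite error-injection setup traced above (host/digest.sh@61 through @69) is: enable per-status-code NVMe error counting with unlimited bdev retries, attach the controller with TCP data digests enabled while crc32c injection is disabled, then arm crc32c corruption and start the workload. A bash sketch mirroring the traced calls; where the trace hides the target socket behind rpc_cmd, the bperf socket is assumed:
  RPC='scripts/rpc.py -s /var/tmp/bperf.sock'                 # bperf socket from this run
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # keep error counters; -1 = retry I/O indefinitely
  $RPC accel_error_inject_error -o crc32c -t disable          # no corruption while attaching (socket assumed)
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # --ddgst: CRC32C data digest on NVMe/TCP data PDUs
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # arm crc32c corruption -> the digest errors logged below
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # -> 'Running I/O for 2 seconds...'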
00:26:03.304 [2024-10-30 12:37:35.792382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166f6458 00:26:03.304 [2024-10-30 12:37:35.793456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.793497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.804771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166f46d0 00:26:03.304 [2024-10-30 12:37:35.805836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.805883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.816501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166eee38 00:26:03.304 [2024-10-30 12:37:35.817508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.817563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.828580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166de8a8 00:26:03.304 [2024-10-30 12:37:35.829298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.829328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.843451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166e4140 00:26:03.304 [2024-10-30 12:37:35.845372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.845402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.852044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166ec840 00:26:03.304 [2024-10-30 12:37:35.852998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.853041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.866673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166fe2e8 00:26:03.304 [2024-10-30 12:37:35.868129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.868175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.878000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166e6fa8 00:26:03.304 [2024-10-30 12:37:35.879402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.879447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.888968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166f92c0 00:26:03.304 [2024-10-30 12:37:35.890108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.890138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.900582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166e88f8 00:26:03.304 [2024-10-30 12:37:35.901758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.901800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.912669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166de8a8 00:26:03.304 [2024-10-30 12:37:35.913826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.913869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.926792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166fdeb0 00:26:03.304 [2024-10-30 12:37:35.928560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.928589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.935164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166fd208 00:26:03.304 [2024-10-30 12:37:35.936159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.936203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:03.304 [2024-10-30 12:37:35.949703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166f3e60 00:26:03.304 [2024-10-30 12:37:35.951350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.304 [2024-10-30 12:37:35.951392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:03.304 [2024-10-30 12:37:35.961833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754210) with pdu=0x2000166fa7d8
00:26:03.304 [2024-10-30 12:37:35.963376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:03.304 [2024-10-30 12:37:35.963419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0
[... the same three-line sequence (data digest error, WRITE command print, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every queued write from 12:37:35.97 through 12:37:36.71, first across varying pdu offsets (0x2000166de038-0x2000166ff3c8) and, from 12:37:36.64 onward, steadily on pdu=0x2000166f5be8 with cid cycling 0/51/113 and sqhd:007f ...]
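The three-line pattern above is the digest-failure path end to end: the TCP transport flags a CRC32C data digest mismatch (data_crc32_calc_done), and the host prints the affected WRITE together with its TRANSIENT TRANSPORT ERROR (00/22) completion. A minimal sketch for tallying both sides from a saved copy of this console output, assuming it was captured to a hypothetical console.log (this is not a file the job itself writes):

```bash
#!/usr/bin/env bash
# Tally digest failures against transient-transport completions in a saved log.
# console.log is an assumed capture of this console output (hypothetical path).
log=${1:-console.log}
digest_errors=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log")
transient=$(grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' "$log")
echo "digest errors: ${digest_errors}, transient transport completions: ${transient}"
# In a clean run of this test the two counts should track each other, since
# every injected digest error surfaces as a (00/22) completion on the host.
```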
[... the pdu=0x2000166f5be8 sequence continues at roughly 14 ms intervals through 12:37:37.778, with a throughput checkpoint logged partway through ...]
00:26:04.337 20828.00 IOPS, 81.36 MiB/s [2024-10-30T11:37:37.018Z]
00:26:05.115 19402.50 IOPS, 75.79 MiB/s
00:26:05.115 Latency(us)
00:26:05.115 [2024-10-30T11:37:37.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:05.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:05.115 nvme0n1 : 2.01 19399.97 75.78 0.00 0.00 6582.26 2669.99 15243.19
00:26:05.115 [2024-10-30T11:37:37.796Z] ===================================================================================================================
00:26:05.115 [2024-10-30T11:37:37.796Z] Total : 19399.97 75.78 0.00 0.00 6582.26 2669.99 15243.19
00:26:05.115 [2024-10-30T11:37:37.796Z] ===================================================================================================================
00:26:05.115 [2024-10-30T11:37:37.796Z] Total                       :              19399.97      75.78       0.00     0.00    6582.26    2669.99   15243.19
00:26:05.115 {
00:26:05.115   "results": [
00:26:05.115     {
00:26:05.115       "job": "nvme0n1",
00:26:05.115       "core_mask": "0x2",
00:26:05.115       "workload": "randwrite",
00:26:05.115       "status": "finished",
00:26:05.115       "queue_depth": 128,
00:26:05.115       "io_size": 4096,
00:26:05.115       "runtime": 2.008921,
00:26:05.115       "iops": 19399.96644965133,
00:26:05.115       "mibps": 75.7811189439505,
00:26:05.115       "io_failed": 0,
00:26:05.115       "io_timeout": 0,
00:26:05.115       "avg_latency_us": 6582.261190434784,
00:26:05.115       "min_latency_us": 2669.9851851851854,
00:26:05.115       "max_latency_us": 15243.188148148149
00:26:05.115     }
00:26:05.115   ],
00:26:05.115   "core_count": 1
00:26:05.115 }
00:26:05.373 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:05.373 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:05.373 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:05.373 | .driver_specific
00:26:05.373 | .nvme_error
00:26:05.373 | .status_code
00:26:05.373 | .command_transient_transport_error'
00:26:05.373 12:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 ))
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 717910
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 717910 ']'
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 717910
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 717910
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 717910'
killing process with pid 717910
00:26:05.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 717910
Received shutdown signal, test time was about 2.000000 seconds
00:26:05.632
00:26:05.632 Latency(us)
[2024-10-30T11:37:38.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-30T11:37:38.313Z] ===================================================================================================================
[2024-10-30T11:37:38.313Z] Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
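The trace above is the test's pass/fail check: bdev_get_iostat returns the per-bdev NVMe error counters enabled by --nvme-error-stat, and jq pulls out the transient-transport-error count -- 152 digest-error completions here, while io_failed stays 0 because the bdev layer retried every corrupted WRITE. A minimal standalone sketch of the same query, assuming the rpc.py path and socket shown in the trace:

    # Sketch: count TRANSIENT TRANSPORT ERROR completions seen by a bdev.
    # Assumes bdevperf is serving RPCs on /var/tmp/bperf.sock, as in the trace.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # the assertion at digest.sh@71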
00:26:05.633 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 717910
00:26:05.891 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=718317
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 718317 /var/tmp/bperf.sock
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 718317 ']'
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:05.891 [2024-10-30 12:37:38.408111] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:26:05.891 [2024-10-30 12:37:38.408205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718317 ]
00:26:05.891 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:05.891 Zero copy mechanism will not be used.
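The launch traced above restarts bdevperf for the second error pass: -m 2 pins it to core 1 (matching the "Reactor started on core 1" notice below), -w randwrite -o 131072 -q 16 -t 2 define a 128 KiB random-write workload at queue depth 16 for 2 seconds, and -z holds the app idle until a perform_tests RPC arrives; waitforlisten then polls until the -r socket answers. A simplified launch-and-wait sketch under those assumptions -- the harness's real waitforlisten in autotest_common.sh does more bookkeeping, and rpc_get_methods is just a cheap RPC used here to probe readiness:

    # Simplified stand-in for the bdevperf launch + waitforlisten pattern above.
    sock=/var/tmp/bperf.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done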
00:26:05.891 [2024-10-30 12:37:38.475133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:05.891 [2024-10-30 12:37:38.532408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:06.148 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:06.148 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:26:06.148 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:06.148 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:06.407 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:06.407 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:06.407 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:06.407 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:06.407 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:06.407 12:37:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:06.976 nvme0n1
00:26:06.976 12:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:06.976 12:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:06.976 12:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:06.976 12:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:06.976 12:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:06.976 12:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:06.976 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:06.976 Zero copy mechanism will not be used.
00:26:06.976 Running I/O for 2 seconds...
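The RPC sequence traced above is the core of the digest-error scenario: per-bdev NVMe error counters are switched on with --bdev-retry-count -1 (retry indefinitely, which is why the earlier run showed io_failed 0 alongside 152 transient errors), crc32c injection is disabled so the controller can attach cleanly with TCP data digest (--ddgst) enabled, and only then is the accel crc32c operation set to corrupt its results, so every data PDU digest check from here on fails and each WRITE below completes with TRANSIENT TRANSPORT ERROR (00/22), dnr:0 (retryable). Condensed into plain commands -- note bperf_rpc targets /var/tmp/bperf.sock while rpc_cmd targets the nvmf target's default RPC socket, which the trace does not show (omitting -s below uses rpc.py's default):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Per-bdev NVMe error stats on, unlimited retries (-1):
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Injection off so the attach itself completes cleanly:
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results on the target so data digests stop matching
    # (-i 32 is passed verbatim from the trace):
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the 2-second run configured when bdevperf was launched:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests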
00:26:06.976 [2024-10-30 12:37:39.531067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90
00:26:06.976 [2024-10-30 12:37:39.531489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.976 [2024-10-30 12:37:39.531543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same data_crc32_calc_done / print_command / print_completion triplet repeats roughly every 5-7 ms for the rest of the run (all on qid:1 cid:15, len:32, sqhd cycling 0001/0021/0041/0061, varying LBAs) from 12:37:39.536 through 12:37:40.085 ...]
00:26:07.497 [2024-10-30 12:37:40.091764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90
00:26:07.497 [2024-10-30 12:37:40.092049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-30 12:37:40.092079] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.098176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.098470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.098501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.104657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.104953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.104983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.109716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.110010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.110039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.114814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.115127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.115157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.120043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.120368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.120399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.126399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.126709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.126739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.132253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.132546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 
[2024-10-30 12:37:40.132577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.137350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.137664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.137694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.142478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.142820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.142849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.147506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.147819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.147850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.152606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.152932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.152961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.157709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.158039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.497 [2024-10-30 12:37:40.158068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.497 [2024-10-30 12:37:40.162799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.497 [2024-10-30 12:37:40.163093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.498 [2024-10-30 12:37:40.163137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.498 [2024-10-30 12:37:40.167778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.498 [2024-10-30 12:37:40.168096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.498 [2024-10-30 12:37:40.168125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.498 [2024-10-30 12:37:40.172856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.498 [2024-10-30 12:37:40.173197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.498 [2024-10-30 12:37:40.173227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.498 [2024-10-30 12:37:40.177882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.498 [2024-10-30 12:37:40.178178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.498 [2024-10-30 12:37:40.178208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.758 [2024-10-30 12:37:40.182801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.758 [2024-10-30 12:37:40.183144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.758 [2024-10-30 12:37:40.183175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.758 [2024-10-30 12:37:40.188025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.758 [2024-10-30 12:37:40.188369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.758 [2024-10-30 12:37:40.188399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.758 [2024-10-30 12:37:40.193191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.758 [2024-10-30 12:37:40.193483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.758 [2024-10-30 12:37:40.193514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.758 [2024-10-30 12:37:40.198376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.758 [2024-10-30 12:37:40.198646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.758 [2024-10-30 12:37:40.198676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.758 [2024-10-30 12:37:40.203737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.758 [2024-10-30 12:37:40.203806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.203835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.209742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.210047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.210076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.215394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.215701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.215734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.221154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.221511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.221541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.226869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.227167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.227196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.232674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.232982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.233010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.238476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.238783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.238813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.245576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.245661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.245689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.251693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.252040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.252070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.256893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.257181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.257223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.262008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.262378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.262422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.267208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.267498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.267528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.272769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.273053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.273099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.279291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.279601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.279629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.285908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 
[2024-10-30 12:37:40.286179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.286222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.292341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.292621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.292652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.299136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.299223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.299251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.306423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.306725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.306755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.313061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.313383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.313413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.319096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.319416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.319446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.324796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.325077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.325121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.330395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.330732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.330762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.336074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.336364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.336403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.341631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.341932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.341962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.347224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.347529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.347558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.352885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.353199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.353228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.358147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.358467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.358498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.759 [2024-10-30 12:37:40.363610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.759 [2024-10-30 12:37:40.363906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.759 [2024-10-30 12:37:40.363935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.369155] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.369477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.369513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.374842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.375140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.375170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.380253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.380580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.380623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.385973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.386327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.386357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.391588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.391977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.392007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.397639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.397959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.397988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.402765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.403071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.403100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
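The triplets above all show the same event: tcp.c:2233:data_crc32_calc_done fires when the CRC-32C data digest (DDGST) carried by an NVMe/TCP data PDU does not match a digest recomputed over the received payload, and the WRITE it belongs to is then completed with the transient transport status printed by spdk_nvme_print_completion. A minimal sketch of that check, assuming only the standard CRC-32C parameters NVMe/TCP digests use (reflected polynomial 0x82F63B78, initial value and final XOR of 0xFFFFFFFF); the helper names are hypothetical and this is not SPDK's optimized implementation:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Plain bitwise CRC-32C (Castagnoli). Check value: crc32c("123456789", 9) == 0xE3069283. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;                     /* initial value */

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int b = 0; b < 8; b++)
                        crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;                       /* final XOR */
}

/* True when the DDGST trailing a data PDU matches its payload. */
static bool ddgst_ok(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
        return crc32c(data, len) == recv_ddgst;
}

When the digests disagree the payload was damaged in transit, not rejected by the namespace, so the transport surfaces a retryable error instead of tearing the connection down.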
00:26:07.760 [2024-10-30 12:37:40.407805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.408143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.408173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.412899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.413190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.413219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.417876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.418158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.418188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.422908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.423239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.423291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.427923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.428217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.428247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.433186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.433557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.433587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.760 [2024-10-30 12:37:40.438423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:07.760 [2024-10-30 12:37:40.438736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.760 [2024-10-30 12:37:40.438765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.443501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.443848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.443879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.448616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.448945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.448975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.453788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.454084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.454114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.458840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.459164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.459200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.463963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.464289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.464319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.468958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.469241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.469279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.474150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.474445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.474475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.479599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.479909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.479940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.022 [2024-10-30 12:37:40.484577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.022 [2024-10-30 12:37:40.484898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.022 [2024-10-30 12:37:40.484927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.489670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.489995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.490024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.495004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.495354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.495385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.500312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.500623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.500666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.505444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.505763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.505793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.510515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.510818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.510863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.515583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.515900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.515929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.023 5417.00 IOPS, 677.12 MiB/s [2024-10-30T11:37:40.704Z] [2024-10-30 12:37:40.522083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.522406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.522437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.527106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.527426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.527458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.532195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.532510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.532538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.537282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.537565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.537596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.542278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.542562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.542606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.547409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.547695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.547725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.552436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.552741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.552771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.557478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.557793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.557823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.562499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.562809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.562840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.567481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.567792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.567820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.573021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.573366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.573396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.578424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.578751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.578781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.583423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.583732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.583762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.588494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.588835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.588865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.593521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.593817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.593852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.598527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.598865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.598912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.604381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.604695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.604740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.609406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.609730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.609760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.614427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.614739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.023 [2024-10-30 12:37:40.614769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.023 [2024-10-30 12:37:40.619473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.023 [2024-10-30 12:37:40.619808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.023 [2024-10-30 12:37:40.619839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:08.023 [2024-10-30 12:37:40.624600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90
00:26:08.023 [2024-10-30 12:37:40.624920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.023 [2024-10-30 12:37:40.624950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repeated log sequence elided: the same three messages (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90; a WRITE sqid:1 cid:15 nsid:1 len:32 at a varying lba; its completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061, p:0 m:0 dnr:0) recur continuously from 12:37:40.629 through 12:37:41.399, Jenkins time 00:26:08.023 to 00:26:08.815 ...]
00:26:08.815 [2024-10-30 12:37:41.399331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90
00:26:08.815 [2024-10-30 12:37:41.399603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.399633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.406049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.406363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.406394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.413063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.413340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.413370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.419719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.419978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.420007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.426681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.426951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.426981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.433561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.433919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.433948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.440531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.440804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.440835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.447601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.447856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.447886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.454572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.815 [2024-10-30 12:37:41.454853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.815 [2024-10-30 12:37:41.454884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.815 [2024-10-30 12:37:41.461185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.816 [2024-10-30 12:37:41.461456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.816 [2024-10-30 12:37:41.461486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.816 [2024-10-30 12:37:41.468276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.816 [2024-10-30 12:37:41.468529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.816 [2024-10-30 12:37:41.468559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.816 [2024-10-30 12:37:41.474936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.816 [2024-10-30 12:37:41.475214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.816 [2024-10-30 12:37:41.475245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.816 [2024-10-30 12:37:41.481821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.816 [2024-10-30 12:37:41.482090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.816 [2024-10-30 12:37:41.482119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.816 [2024-10-30 12:37:41.487440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.816 [2024-10-30 12:37:41.487706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.816 [2024-10-30 12:37:41.487736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.816 [2024-10-30 12:37:41.492170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:08.816 
[2024-10-30 12:37:41.492431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.816 [2024-10-30 12:37:41.492473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.074 [2024-10-30 12:37:41.496968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:09.074 [2024-10-30 12:37:41.497262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.074 [2024-10-30 12:37:41.497305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.074 [2024-10-30 12:37:41.501726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:09.074 [2024-10-30 12:37:41.502010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.074 [2024-10-30 12:37:41.502041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.074 [2024-10-30 12:37:41.506384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:09.074 [2024-10-30 12:37:41.506682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.074 [2024-10-30 12:37:41.506713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.075 [2024-10-30 12:37:41.511363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:09.075 [2024-10-30 12:37:41.511660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.075 [2024-10-30 12:37:41.511690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.075 [2024-10-30 12:37:41.516180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:09.075 [2024-10-30 12:37:41.516450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.075 [2024-10-30 12:37:41.516481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.075 [2024-10-30 12:37:41.521038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754550) with pdu=0x2000166fef90 00:26:09.075 [2024-10-30 12:37:41.521327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.075 [2024-10-30 12:37:41.521357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.075 5566.00 IOPS, 695.75 MiB/s 00:26:09.075 Latency(us) 00:26:09.075 [2024-10-30T11:37:41.756Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.075 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:09.075 nvme0n1 : 2.00 5565.21 695.65 0.00 0.00 2868.02 1905.40 11553.75 00:26:09.075 [2024-10-30T11:37:41.756Z] =================================================================================================================== 00:26:09.075 [2024-10-30T11:37:41.756Z] Total : 5565.21 695.65 0.00 0.00 2868.02 1905.40 11553.75 00:26:09.075 { 00:26:09.075 "results": [ 00:26:09.075 { 00:26:09.075 "job": "nvme0n1", 00:26:09.075 "core_mask": "0x2", 00:26:09.075 "workload": "randwrite", 00:26:09.075 "status": "finished", 00:26:09.075 "queue_depth": 16, 00:26:09.075 "io_size": 131072, 00:26:09.075 "runtime": 2.004058, 00:26:09.075 "iops": 5565.208192577261, 00:26:09.075 "mibps": 695.6510240721576, 00:26:09.075 "io_failed": 0, 00:26:09.075 "io_timeout": 0, 00:26:09.075 "avg_latency_us": 2868.018571585124, 00:26:09.075 "min_latency_us": 1905.3985185185186, 00:26:09.075 "max_latency_us": 11553.754074074073 00:26:09.075 } 00:26:09.075 ], 00:26:09.075 "core_count": 1 00:26:09.075 } 00:26:09.075 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:09.075 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:09.075 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:09.075 | .driver_specific 00:26:09.075 | .nvme_error 00:26:09.075 | .status_code 00:26:09.075 | .command_transient_transport_error' 00:26:09.075 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 359 > 0 )) 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 718317 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 718317 ']' 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 718317 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 718317 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 718317' 00:26:09.335 killing process with pid 718317 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 718317 00:26:09.335 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.335 00:26:09.335 Latency(us) 00:26:09.335 [2024-10-30T11:37:42.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.335 
[2024-10-30T11:37:42.016Z] =================================================================================================================== 00:26:09.335 [2024-10-30T11:37:42.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.335 12:37:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 718317 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 716946 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 716946 ']' 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 716946 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 716946 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 716946' 00:26:09.595 killing process with pid 716946 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 716946 00:26:09.595 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 716946 00:26:09.854 00:26:09.854 real 0m15.449s 00:26:09.854 user 0m30.487s 00:26:09.854 sys 0m4.313s 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.854 ************************************ 00:26:09.854 END TEST nvmf_digest_error 00:26:09.854 ************************************ 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.854 rmmod nvme_tcp 00:26:09.854 rmmod nvme_fabrics 00:26:09.854 rmmod nvme_keyring 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 716946 ']' 00:26:09.854 12:37:42 
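
The transient-error gate above, (( 359 > 0 )), reduces to one iostat RPC plus a jq filter over the bdev's NVMe error counters. A minimal standalone sketch of that check, assuming the bperf RPC socket at /var/tmp/bperf.sock used in this run:

#!/usr/bin/env bash
# Sketch: count transient transport errors recorded for a bdev, as host/digest.sh does.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest-error test only passes if the injected CRC corruption produced
# at least one COMMAND TRANSIENT TRANSPORT ERROR completion.
(( errcount > 0 )) || exit 1
echo "observed $errcount transient transport errors"
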
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 716946 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 716946 ']' 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 716946 00:26:09.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (716946) - No such process 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 716946 is not found' 00:26:09.854 Process with pid 716946 is not found 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.854 12:37:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:12.395 00:26:12.395 real 0m35.727s 00:26:12.395 user 1m1.781s 00:26:12.395 sys 0m10.667s 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.395 ************************************ 00:26:12.395 END TEST nvmf_digest 00:26:12.395 ************************************ 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.395 ************************************ 00:26:12.395 START TEST nvmf_bdevperf 00:26:12.395 ************************************ 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:12.395 * Looking for test storage... 
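
The "No such process" result above is benign: nvmftestfini re-runs killprocess on a pid the test body already reaped, and the helper treats a missing pid as success. A sketch of that tolerant-kill pattern (the function body is an illustration mirroring the trace, not the literal autotest_common.sh source):

# Sketch: kill a test daemon, succeeding quietly if it already exited.
killprocess() {
  local pid=$1 process_name
  [ -n "$pid" ] || return 1
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "Process with pid $pid is not found"   # already gone; nothing to do
    return 0
  fi
  [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" = sudo ] && return 1        # never signal the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true               # wait only works for our own children
}
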
00:26:12.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:12.395 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:12.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.396 --rc genhtml_branch_coverage=1 00:26:12.396 --rc genhtml_function_coverage=1 00:26:12.396 --rc genhtml_legend=1 00:26:12.396 --rc geninfo_all_blocks=1 00:26:12.396 --rc geninfo_unexecuted_blocks=1 00:26:12.396 00:26:12.396 ' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:12.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.396 --rc genhtml_branch_coverage=1 00:26:12.396 --rc genhtml_function_coverage=1 00:26:12.396 --rc genhtml_legend=1 00:26:12.396 --rc geninfo_all_blocks=1 00:26:12.396 --rc geninfo_unexecuted_blocks=1 00:26:12.396 00:26:12.396 ' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:12.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.396 --rc genhtml_branch_coverage=1 00:26:12.396 --rc genhtml_function_coverage=1 00:26:12.396 --rc genhtml_legend=1 00:26:12.396 --rc geninfo_all_blocks=1 00:26:12.396 --rc geninfo_unexecuted_blocks=1 00:26:12.396 00:26:12.396 ' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:12.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.396 --rc genhtml_branch_coverage=1 00:26:12.396 --rc genhtml_function_coverage=1 00:26:12.396 --rc genhtml_legend=1 00:26:12.396 --rc geninfo_all_blocks=1 00:26:12.396 --rc geninfo_unexecuted_blocks=1 00:26:12.396 00:26:12.396 ' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
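
The cmp_versions trace above is how the harness decides that lcov 1.15 predates 2.x before choosing coverage flags. A condensed sketch of the same dot-separated comparison (the function name and simplified control flow are illustrative, not the exact scripts/common.sh source):

# Sketch: succeed when version $1 sorts strictly before version $2.
version_lt() {
  local -a ver1 ver2
  local v max
  IFS='.-:' read -ra ver1 <<<"$1"
  IFS='.-:' read -ra ver2 <<<"$2"
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_* options'
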
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
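
One detail from the common.sh sourcing above: the host identity is minted once per run with nvme gen-hostnqn, and the host ID is just the UUID suffix of that NQN. A sketch of the derivation (the parameter expansion is an assumption about how the suffix is extracted; the variable names follow the trace):

# Sketch: derive the NVMe-oF host identity used by later connect calls.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # strip through the last ':' to keep the UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
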
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.396 12:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.297 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:14.298 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:14.298 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
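
The scan traced here walks a table of known Intel and Mellanox PCI IDs and, for this tcp/e810 configuration, keeps only ports whose device ID is 0x159b. The same discovery can be done directly against sysfs; a sketch trimmed to the E810 ID seen in this run:

# Sketch: list net devices backed by Intel E810 ports (vendor 0x8086, device 0x159b).
for pci in /sys/bus/pci/devices/*; do
  [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
  [ "$(cat "$pci/device")" = 0x159b ] || continue
  echo "Found ${pci##*/} (0x8086 - 0x159b)"
  for net in "$pci"/net/*; do
    [ -e "$net" ] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0, cvl_0_1
  done
done
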
00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:14.298 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:14.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:26:14.298 00:26:14.298 --- 10.0.0.2 ping statistics --- 00:26:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.298 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:14.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:26:14.298 00:26:14.298 --- 10.0.0.1 ping statistics --- 00:26:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.298 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.298 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=720685 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 720685 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 720685 ']' 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:14.299 12:37:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.557 [2024-10-30 12:37:47.009826] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
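
Before the target app starts, the harness has moved one E810 port into a private network namespace and pinged both directions, as traced above. That wiring can be reproduced by hand with the interface and address names from this run (cvl_0_0/cvl_0_1, 10.0.0.2/10.0.0.1); run as root:

# Sketch: isolate the target port in its own netns and verify reachability.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns
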
00:26:14.557 [2024-10-30 12:37:47.009901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.557 [2024-10-30 12:37:47.082674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:14.557 [2024-10-30 12:37:47.142567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.557 [2024-10-30 12:37:47.142617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.557 [2024-10-30 12:37:47.142641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.557 [2024-10-30 12:37:47.142651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.557 [2024-10-30 12:37:47.142662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.557 [2024-10-30 12:37:47.144059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.557 [2024-10-30 12:37:47.144136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.557 [2024-10-30 12:37:47.144139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.815 [2024-10-30 12:37:47.277073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.815 Malloc0 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.815 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:14.816 [2024-10-30 12:37:47.332983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.816 { 00:26:14.816 "params": { 00:26:14.816 "name": "Nvme$subsystem", 00:26:14.816 "trtype": "$TEST_TRANSPORT", 00:26:14.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.816 "adrfam": "ipv4", 00:26:14.816 "trsvcid": "$NVMF_PORT", 00:26:14.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.816 "hdgst": ${hdgst:-false}, 00:26:14.816 "ddgst": ${ddgst:-false} 00:26:14.816 }, 00:26:14.816 "method": "bdev_nvme_attach_controller" 00:26:14.816 } 00:26:14.816 EOF 00:26:14.816 )") 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:14.816 12:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:14.816 "params": { 00:26:14.816 "name": "Nvme1", 00:26:14.816 "trtype": "tcp", 00:26:14.816 "traddr": "10.0.0.2", 00:26:14.816 "adrfam": "ipv4", 00:26:14.816 "trsvcid": "4420", 00:26:14.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.816 "hdgst": false, 00:26:14.816 "ddgst": false 00:26:14.816 }, 00:26:14.816 "method": "bdev_nvme_attach_controller" 00:26:14.816 }' 00:26:14.816 [2024-10-30 12:37:47.381591] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
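The rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations. Replayed by hand against the default RPC socket, the whole provisioning sequence (flags copied from the trace) is:

  # TCP transport with an 8192-byte I/O unit size
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem that allows any host (-a), serial number SPDK00000000000001
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the listener call the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420", exactly as seen above.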
00:26:14.816 [2024-10-30 12:37:47.381703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720823 ] 00:26:14.816 [2024-10-30 12:37:47.449640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.075 [2024-10-30 12:37:47.508914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.075 Running I/O for 1 seconds... 00:26:16.455 8470.00 IOPS, 33.09 MiB/s 00:26:16.455 Latency(us) 00:26:16.455 [2024-10-30T11:37:49.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.455 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:16.455 Verification LBA range: start 0x0 length 0x4000 00:26:16.455 Nvme1n1 : 1.02 8503.39 33.22 0.00 0.00 14987.45 3276.80 12913.02 00:26:16.455 [2024-10-30T11:37:49.136Z] =================================================================================================================== 00:26:16.455 [2024-10-30T11:37:49.136Z] Total : 8503.39 33.22 0.00 0.00 14987.45 3276.80 12913.02 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=720971 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.455 { 00:26:16.455 "params": { 00:26:16.455 "name": "Nvme$subsystem", 00:26:16.455 "trtype": "$TEST_TRANSPORT", 00:26:16.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.455 "adrfam": "ipv4", 00:26:16.455 "trsvcid": "$NVMF_PORT", 00:26:16.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.455 "hdgst": ${hdgst:-false}, 00:26:16.455 "ddgst": ${ddgst:-false} 00:26:16.455 }, 00:26:16.455 "method": "bdev_nvme_attach_controller" 00:26:16.455 } 00:26:16.455 EOF 00:26:16.455 )") 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
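A quick consistency check on the bdevperf table above: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size, e.g. 8503.39 * 4096 / 2^20 ≈ 33.22 MiB/s for the verify run, and 8470 * 4096 / 2^20 ≈ 33.09 for the first sample:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8503.39 * 4096 / 1048576 }'   # prints 33.22 MiB/s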
00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:16.455 12:37:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:16.455 "params": { 00:26:16.455 "name": "Nvme1", 00:26:16.455 "trtype": "tcp", 00:26:16.455 "traddr": "10.0.0.2", 00:26:16.455 "adrfam": "ipv4", 00:26:16.455 "trsvcid": "4420", 00:26:16.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.455 "hdgst": false, 00:26:16.455 "ddgst": false 00:26:16.455 }, 00:26:16.455 "method": "bdev_nvme_attach_controller" 00:26:16.455 }' 00:26:16.455 [2024-10-30 12:37:48.999174] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:26:16.455 [2024-10-30 12:37:48.999292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720971 ] 00:26:16.455 [2024-10-30 12:37:49.067637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.455 [2024-10-30 12:37:49.123899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.712 Running I/O for 15 seconds... 00:26:19.080 8433.00 IOPS, 32.94 MiB/s [2024-10-30T11:37:52.025Z] 8452.00 IOPS, 33.02 MiB/s [2024-10-30T11:37:52.025Z] 12:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 720685 00:26:19.344 12:37:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:19.344 [2024-10-30 12:37:51.967870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.344 [2024-10-30 12:37:51.967925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.344 [2024-10-30 12:37:51.967954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.344 [2024-10-30 12:37:51.967971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.344 [2024-10-30 12:37:51.967997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.344 [2024-10-30 12:37:51.968013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.344 [2024-10-30 12:37:51.968030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.344 [2024-10-30 12:37:51.968044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.344 [2024-10-30 12:37:51.968060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.344 [2024-10-30 12:37:51.968091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.344 [2024-10-30 12:37:51.968109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.344 [2024-10-30 
12:37:51.968123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the identical READ / "ABORTED - SQ DELETION (00/08)" pair repeats for the remaining queued reads, lba 48592 through 49400 in steps of 8 (timestamps 12:37:51.968161 through .971203), then for seven queued WRITEs, lba 49512 through 49560 (len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), and for the reads lba 49408 through 49488 ...] 
00:26:19.347 [2024-10-30 12:37:51.971811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-10-30 12:37:51.971823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.347 [2024-10-30 12:37:51.971835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134cb30 is same with the state(6) to be set 00:26:19.347 [2024-10-30 12:37:51.971851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:19.347 [2024-10-30 12:37:51.971861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:19.347 [2024-10-30 12:37:51.971871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49504 len:8 PRP1 0x0 PRP2 0x0 00:26:19.347 [2024-10-30 12:37:51.971883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.347 [2024-10-30 12:37:51.975050] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:19.347 [2024-10-30 12:37:51.975126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:19.347 [2024-10-30 12:37:51.975841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-10-30 12:37:51.975870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:19.347 [2024-10-30 12:37:51.975888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:19.347 [2024-10-30 12:37:51.976105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:19.347 [2024-10-30 12:37:51.976367] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:19.347 [2024-10-30 12:37:51.976389] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:19.347 [2024-10-30 12:37:51.976404] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:19.347 [2024-10-30 12:37:51.979677] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:19.347 [2024-10-30 12:37:51.988664] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:19.347 [2024-10-30 12:37:51.989081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-10-30 12:37:51.989111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:19.347 [2024-10-30 12:37:51.989127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:19.347 [2024-10-30 12:37:51.989365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:19.347 [2024-10-30 12:37:51.989588] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:19.347 [2024-10-30 12:37:51.989610] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:19.347 [2024-10-30 12:37:51.989623] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:19.347 [2024-10-30 12:37:51.992559] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:19.347 [2024-10-30 12:37:52.001705] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:19.347 [2024-10-30 12:37:52.002060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.347 [2024-10-30 12:37:52.002088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:19.347 [2024-10-30 12:37:52.002105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:19.348 [2024-10-30 12:37:52.002353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:19.348 [2024-10-30 12:37:52.002590] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:19.348 [2024-10-30 12:37:52.002613] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:19.348 [2024-10-30 12:37:52.002626] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:19.348 [2024-10-30 12:37:52.005515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:19.348 [2024-10-30 12:37:52.014768] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:19.348 [2024-10-30 12:37:52.015180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.348 [2024-10-30 12:37:52.015210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:19.348 [2024-10-30 12:37:52.015226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:19.348 [2024-10-30 12:37:52.015494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:19.348 [2024-10-30 12:37:52.015721] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:19.348 [2024-10-30 12:37:52.015742] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:19.348 [2024-10-30 12:37:52.015760] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:19.348 [2024-10-30 12:37:52.018635] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:19.611 [2024-10-30 12:37:52.028084] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.028456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.028486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.028503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.028755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.028959] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.028980] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.028992] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.031874] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.041173] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.041515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.041544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.041561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.041777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.041981] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.042002] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.042015] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.044891] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.054261] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.054670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.054698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.054715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.054950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.055155] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.055176] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.055188] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.058066] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.067507] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.067904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.067934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.067951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.068168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.068420] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.068442] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.068456] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.071328] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.080591] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.080996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.081024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.081040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.081289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.081491] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.081513] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.081526] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.084564] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.093858] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.094272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.094316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.094334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.094575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.094780] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.094800] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.094813] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.097686] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.107002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.107383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.107412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.107434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.107658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.107864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.107884] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.107897] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.110895] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.120149] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.120487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.120516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.120532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.120764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.120968] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.120987] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.121000] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.123873] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.133127] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.133546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.133575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.133591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.611 [2024-10-30 12:37:52.133822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.611 [2024-10-30 12:37:52.134026] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.611 [2024-10-30 12:37:52.134047] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.611 [2024-10-30 12:37:52.134060] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.611 [2024-10-30 12:37:52.136980] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.611 [2024-10-30 12:37:52.146170] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.611 [2024-10-30 12:37:52.146498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.611 [2024-10-30 12:37:52.146527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.611 [2024-10-30 12:37:52.146543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.146760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.146970] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.146989] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.147002] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.149922] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.159440] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.159803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.159832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.159848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.160084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.160317] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.160339] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.160352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.163199] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.172621] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.173032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.173060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.173076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.173325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.173558] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.173594] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.173608] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.176509] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.185720] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.186123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.186150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.186167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.186424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.186634] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.186654] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.186671] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.189546] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.198918] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.199270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.199299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.199315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.199554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.199758] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.199779] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.199791] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.202671] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.212038] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.212400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.212430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.212447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.212682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.212886] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.212905] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.212918] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.215717] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.225187] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.225559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.225603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.225620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.225854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.226057] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.226078] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.226092] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.229142] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.238492] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.239001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.239054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.239070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.239331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.239544] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.239569] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.239583] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.242589] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.251767] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.252112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.252142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.252159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.252428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.252642] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.252664] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.252677] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.255772] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.264882] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.265297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.265328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.265345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.265585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.265789] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.265809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.265823] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.268695] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.612 [2024-10-30 12:37:52.278095] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.612 [2024-10-30 12:37:52.278455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.612 [2024-10-30 12:37:52.278485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.612 [2024-10-30 12:37:52.278507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.612 [2024-10-30 12:37:52.278738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.612 [2024-10-30 12:37:52.278929] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.612 [2024-10-30 12:37:52.278950] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.612 [2024-10-30 12:37:52.278963] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.612 [2024-10-30 12:37:52.281837] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.613 [2024-10-30 12:37:52.291437] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.613 [2024-10-30 12:37:52.291859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.613 [2024-10-30 12:37:52.291888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.613 [2024-10-30 12:37:52.291904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.613 [2024-10-30 12:37:52.292146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.875 [2024-10-30 12:37:52.292398] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.875 [2024-10-30 12:37:52.292423] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.875 [2024-10-30 12:37:52.292437] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.875 [2024-10-30 12:37:52.295423] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.875 [2024-10-30 12:37:52.304574] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.875 [2024-10-30 12:37:52.304982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.875 [2024-10-30 12:37:52.305010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.875 [2024-10-30 12:37:52.305028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.875 [2024-10-30 12:37:52.305275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.875 [2024-10-30 12:37:52.305489] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.875 [2024-10-30 12:37:52.305511] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.875 [2024-10-30 12:37:52.305525] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.875 [2024-10-30 12:37:52.308413] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.875 7484.00 IOPS, 29.23 MiB/s [2024-10-30T11:37:52.556Z]
[2024-10-30 12:37:52.318905] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.875 [2024-10-30 12:37:52.319216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.875 [2024-10-30 12:37:52.319244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.875 [2024-10-30 12:37:52.319283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.875 [2024-10-30 12:37:52.319510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.875 [2024-10-30 12:37:52.319739] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.875 [2024-10-30 12:37:52.319760] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.875 [2024-10-30 12:37:52.319772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.875 [2024-10-30 12:37:52.322644] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.875 [2024-10-30 12:37:52.332008] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.875 [2024-10-30 12:37:52.332416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.332445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.332461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.332696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.332900] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.332921] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.332933] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.335806] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
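[Editor's note] The line "7484.00 IOPS, 29.23 MiB/s" interleaved above is a periodic per-second throughput sample from the I/O generator, printed while the reconnect loop spins. The two figures are mutually consistent if each I/O is 4 KiB, which matches the READ at the top of this section (len:8, i.e. 8 blocks, assuming 512-byte LBAs; the block size is inferred from the log, not stated by the tool). A quick consistency check:

/* Sanity check: 7484 IOPS at an assumed 4 KiB I/O size reproduces the
 * 29.23 MiB/s the log reports. */
#include <stdio.h>

int main(void)
{
    double iops = 7484.0;          /* from the log sample above        */
    double io_size = 8 * 512.0;    /* 4096 B per I/O (assumed 512B LBAs) */
    double mib_s = iops * io_size / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_s); /* prints 29.23 */
    return 0;
}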
00:26:19.876 [2024-10-30 12:37:52.344971] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.345377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.345405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.345421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.345652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.345857] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.345877] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.345889] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.348761] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.357968] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.358312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.358340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.358356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.358592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.358795] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.358817] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.358836] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.361712] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.371047] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.371363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.371392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.371408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.371620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.371825] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.371846] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.371859] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.374738] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.384144] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.384478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.384508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.384525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.384743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.384948] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.384969] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.384982] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.387861] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.397143] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.397496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.397524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.397539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.397771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.397960] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.397980] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.397992] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.400892] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.410294] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.410648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.410677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.410693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.410928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.411133] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.411153] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.411165] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.414137] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.423622] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.423951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.423979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.423995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.424198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.876 [2024-10-30 12:37:52.424452] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.876 [2024-10-30 12:37:52.424475] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.876 [2024-10-30 12:37:52.424489] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.876 [2024-10-30 12:37:52.427655] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.876 [2024-10-30 12:37:52.437082] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.876 [2024-10-30 12:37:52.437426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.876 [2024-10-30 12:37:52.437455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.876 [2024-10-30 12:37:52.437472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.876 [2024-10-30 12:37:52.437716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.437926] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.437947] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.437961] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.440971] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.450432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.450803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.450833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.450855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.451099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.451336] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.451358] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.451372] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.454393] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.463648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.464033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.464062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.464077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.464306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.464530] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.464567] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.464580] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.467551] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.476814] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.477231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.477284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.477302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.477549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.477765] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.477787] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.477801] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.480874] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.490157] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.490571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.490600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.490616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.490839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.491056] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.491078] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.491090] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.494116] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.503401] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.503837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.503866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.503883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.504127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.504370] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.504394] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.504409] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.507438] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.516696] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.517062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.517091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.517107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.517356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.517579] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.517600] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.517614] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.520586] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.530002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.530386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.530415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.530431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.530655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.530866] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.530887] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.877 [2024-10-30 12:37:52.530905] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.877 [2024-10-30 12:37:52.533892] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.877 [2024-10-30 12:37:52.543334] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.877 [2024-10-30 12:37:52.543746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.877 [2024-10-30 12:37:52.543776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.877 [2024-10-30 12:37:52.543793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:19.877 [2024-10-30 12:37:52.544036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:19.877 [2024-10-30 12:37:52.544270] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:19.877 [2024-10-30 12:37:52.544292] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:19.878 [2024-10-30 12:37:52.544321] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:19.878 [2024-10-30 12:37:52.547345] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:19.878 [2024-10-30 12:37:52.556820] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:19.878 [2024-10-30 12:37:52.557141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.878 [2024-10-30 12:37:52.557170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:19.878 [2024-10-30 12:37:52.557187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:20.138 [2024-10-30 12:37:52.557457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:20.138 [2024-10-30 12:37:52.557675] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:20.138 [2024-10-30 12:37:52.557695] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:20.138 [2024-10-30 12:37:52.557708] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:20.138 [2024-10-30 12:37:52.560716] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:20.138 [2024-10-30 12:37:52.570030] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:20.138 [2024-10-30 12:37:52.570378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.138 [2024-10-30 12:37:52.570408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:20.138 [2024-10-30 12:37:52.570425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:20.138 [2024-10-30 12:37:52.570662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:20.138 [2024-10-30 12:37:52.570873] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:20.138 [2024-10-30 12:37:52.570894] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:20.138 [2024-10-30 12:37:52.570907] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:20.138 [2024-10-30 12:37:52.573915] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:20.138 [2024-10-30 12:37:52.583232] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.138 [2024-10-30 12:37:52.583615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.138 [2024-10-30 12:37:52.583644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.138 [2024-10-30 12:37:52.583660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.138 [2024-10-30 12:37:52.583900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.138 [2024-10-30 12:37:52.584094] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.138 [2024-10-30 12:37:52.584115] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.138 [2024-10-30 12:37:52.584127] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.138 [2024-10-30 12:37:52.587125] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.138 [2024-10-30 12:37:52.596631] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.138 [2024-10-30 12:37:52.597057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.138 [2024-10-30 12:37:52.597086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.138 [2024-10-30 12:37:52.597103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.597358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.597580] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.597602] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.597629] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.600587] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.139 [2024-10-30 12:37:52.609849] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.610179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.610208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.610224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.610493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.610732] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.610754] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.610767] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.613766] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.139 [2024-10-30 12:37:52.623022] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.623419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.623449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.623471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.623726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.623921] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.623942] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.623955] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.626935] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.139 [2024-10-30 12:37:52.636168] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.636545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.636575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.636592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.636838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.637032] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.637053] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.637066] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.640040] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.139 [2024-10-30 12:37:52.649522] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.649955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.649985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.650001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.650243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.650474] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.650497] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.650511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.653485] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.139 [2024-10-30 12:37:52.662753] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.663170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.663199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.663216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.663456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.663700] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.663722] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.663735] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.666698] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.139 [2024-10-30 12:37:52.675945] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.676362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.676393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.676410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.676654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.676864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.676886] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.676899] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.679938] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.139 [2024-10-30 12:37:52.689290] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.689609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.689638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.689654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.689879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.690088] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.690109] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.690122] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.693112] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.139 [2024-10-30 12:37:52.702664] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.703042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.703070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.703086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.703319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.703542] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.703563] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.703596] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.706572] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.139 [2024-10-30 12:37:52.715951] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.716337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.716367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.139 [2024-10-30 12:37:52.716383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.139 [2024-10-30 12:37:52.716625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.139 [2024-10-30 12:37:52.716835] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.139 [2024-10-30 12:37:52.716856] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.139 [2024-10-30 12:37:52.716868] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.139 [2024-10-30 12:37:52.719865] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.139 [2024-10-30 12:37:52.729143] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.139 [2024-10-30 12:37:52.729480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.139 [2024-10-30 12:37:52.729508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.729525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.729748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.729961] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.729981] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.729994] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.733251] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.140 [2024-10-30 12:37:52.742453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.140 [2024-10-30 12:37:52.742823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.140 [2024-10-30 12:37:52.742852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.742868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.743104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.743359] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.743381] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.743395] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.746762] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.140 [2024-10-30 12:37:52.755799] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.140 [2024-10-30 12:37:52.756117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.140 [2024-10-30 12:37:52.756145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.756161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.756414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.756648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.756669] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.756681] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.759642] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.140 [2024-10-30 12:37:52.769105] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.140 [2024-10-30 12:37:52.769488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.140 [2024-10-30 12:37:52.769517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.769533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.769766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.769961] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.769982] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.769994] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.772991] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.140 [2024-10-30 12:37:52.782326] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.140 [2024-10-30 12:37:52.782706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.140 [2024-10-30 12:37:52.782735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.782750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.782975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.783186] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.783206] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.783219] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.786213] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.140 [2024-10-30 12:37:52.795780] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.140 [2024-10-30 12:37:52.796164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.140 [2024-10-30 12:37:52.796192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.796213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.796467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.796715] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.796736] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.796748] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.799663] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.140 [2024-10-30 12:37:52.809028] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.140 [2024-10-30 12:37:52.809416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.140 [2024-10-30 12:37:52.809447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.140 [2024-10-30 12:37:52.809464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.140 [2024-10-30 12:37:52.809714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.140 [2024-10-30 12:37:52.809909] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.140 [2024-10-30 12:37:52.809930] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.140 [2024-10-30 12:37:52.809942] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.140 [2024-10-30 12:37:52.812953] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.400 [2024-10-30 12:37:52.822500] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.400 [2024-10-30 12:37:52.822930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.400 [2024-10-30 12:37:52.822959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.400 [2024-10-30 12:37:52.822975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.400 [2024-10-30 12:37:52.823218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.823456] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.823477] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.823490] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.826586] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.401 [2024-10-30 12:37:52.835878] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.836242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.836292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.836309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.836544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.836760] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.836780] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.836792] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.839794] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.401 [2024-10-30 12:37:52.849102] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.849479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.849508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.849524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.849771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.849965] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.849985] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.849998] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.853120] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.401 [2024-10-30 12:37:52.862429] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.862890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.862920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.862937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.863180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.863434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.863456] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.863470] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.866510] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.401 [2024-10-30 12:37:52.875816] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.876201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.876229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.876268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.876501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.876731] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.876753] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.876772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.879858] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.401 [2024-10-30 12:37:52.889115] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.889485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.889514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.889531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.889769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.889979] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.890000] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.890013] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.893113] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.401 [2024-10-30 12:37:52.902483] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.902798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.902840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.902857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.903075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.903329] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.903352] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.903366] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.906326] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.401 [2024-10-30 12:37:52.915774] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.916128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.916157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.916174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.916414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.916648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.916669] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.916682] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.919634] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.401 [2024-10-30 12:37:52.929128] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.929506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.929536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.929552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.929795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.930004] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.930026] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.930039] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.933032] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.401 [2024-10-30 12:37:52.942516] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.942860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.942889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.942905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.943135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.943400] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.943424] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.943438] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.401 [2024-10-30 12:37:52.946504] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.401 [2024-10-30 12:37:52.955800] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.401 [2024-10-30 12:37:52.956183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.401 [2024-10-30 12:37:52.956212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.401 [2024-10-30 12:37:52.956229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.401 [2024-10-30 12:37:52.956483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.401 [2024-10-30 12:37:52.956717] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.401 [2024-10-30 12:37:52.956738] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.401 [2024-10-30 12:37:52.956751] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:52.959761] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.402 [2024-10-30 12:37:52.969091] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:52.969485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:52.969519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:52.969541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:52.969765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:52.969982] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:52.970003] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:52.970016] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:52.973091] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.402 [2024-10-30 12:37:52.982561] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:52.982895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:52.982924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:52.982941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:52.983164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:52.983423] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:52.983447] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:52.983461] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:52.986621] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.402 [2024-10-30 12:37:52.996062] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:52.996717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:52.996747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:52.996763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:52.997012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:52.997206] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:52.997227] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:52.997263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.000362] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.402 [2024-10-30 12:37:53.009450] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:53.009899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:53.009928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:53.009945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:53.010187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:53.010439] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:53.010463] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:53.010478] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.013626] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.402 [2024-10-30 12:37:53.022682] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:53.023103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:53.023133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:53.023150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:53.023401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:53.023617] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:53.023638] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:53.023651] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.026612] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.402 [2024-10-30 12:37:53.035869] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:53.036285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:53.036315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:53.036332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:53.036569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:53.036781] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:53.036802] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:53.036816] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.039787] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.402 [2024-10-30 12:37:53.049164] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:53.049540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:53.049569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:53.049585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:53.049820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:53.050013] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:53.050033] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:53.050051] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.053006] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.402 [2024-10-30 12:37:53.062408] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:53.062747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:53.062776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:53.062792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:53.063014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:53.063224] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:53.063245] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:53.063281] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.066227] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.402 [2024-10-30 12:37:53.075656] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.402 [2024-10-30 12:37:53.076078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.402 [2024-10-30 12:37:53.076107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.402 [2024-10-30 12:37:53.076124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.402 [2024-10-30 12:37:53.076380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.402 [2024-10-30 12:37:53.076610] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.402 [2024-10-30 12:37:53.076631] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.402 [2024-10-30 12:37:53.076644] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.402 [2024-10-30 12:37:53.079736] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.664 [2024-10-30 12:37:53.088999] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.664 [2024-10-30 12:37:53.089358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-10-30 12:37:53.089388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.664 [2024-10-30 12:37:53.089420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.664 [2024-10-30 12:37:53.089664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.664 [2024-10-30 12:37:53.089890] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.664 [2024-10-30 12:37:53.089911] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.664 [2024-10-30 12:37:53.089924] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.664 [2024-10-30 12:37:53.092898] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.664 [2024-10-30 12:37:53.102195] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.664 [2024-10-30 12:37:53.102627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-10-30 12:37:53.102657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.664 [2024-10-30 12:37:53.102673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.664 [2024-10-30 12:37:53.102916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.664 [2024-10-30 12:37:53.103126] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.664 [2024-10-30 12:37:53.103147] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.664 [2024-10-30 12:37:53.103159] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.664 [2024-10-30 12:37:53.106156] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.664 [2024-10-30 12:37:53.115373] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.664 [2024-10-30 12:37:53.115819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-10-30 12:37:53.115849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.664 [2024-10-30 12:37:53.115865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.664 [2024-10-30 12:37:53.116108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.664 [2024-10-30 12:37:53.116329] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.664 [2024-10-30 12:37:53.116352] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.664 [2024-10-30 12:37:53.116366] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.664 [2024-10-30 12:37:53.119359] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.664 [2024-10-30 12:37:53.128604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.664 [2024-10-30 12:37:53.128978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.664 [2024-10-30 12:37:53.129007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.664 [2024-10-30 12:37:53.129023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.129245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.129472] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.129492] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.129505] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.132416] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.665 [2024-10-30 12:37:53.141813] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.142167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.142205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.142222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.142487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.142700] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.142721] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.142734] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.145651] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.665 [2024-10-30 12:37:53.154989] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.155411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.155441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.155458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.155698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.155908] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.155929] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.155942] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.158886] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
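The recurring "The recv state of tqpair=0x1339a40 is same with the state(6) to be set" message is nvme_tcp_qpair_set_recv_state logging that the receive state it was asked to install on the qpair is the state the qpair is already in; the 6 is the numeric value of the driver's internal receive-state enum (nvme_tcp_pdu_recv_state), and which state that number maps to depends on the SPDK revision under test.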
00:26:20.665 [2024-10-30 12:37:53.168228] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.168651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.168682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.168699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.168943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.169154] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.169176] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.169189] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.172195] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.665 [2024-10-30 12:37:53.181469] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.181868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.181896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.181912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.182135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.182397] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.182420] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.182434] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.185368] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.665 [2024-10-30 12:37:53.194881] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.195232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.195281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.195300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.195543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.195788] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.195807] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.195819] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.198797] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.665 [2024-10-30 12:37:53.208163] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.208534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.208564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.208582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.208836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.209031] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.209053] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.209066] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.212143] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
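Each failed connect above is followed by "Failed to flush tqpair=0x1339a40 (9): Bad file descriptor": by the time nvme_tcp_qpair_process_completions tries to flush the qpair's socket, the descriptor behind it has already been torn down, so the I/O syscall fails with errno 9 (EBADF). A tiny illustration of that failure mode (plain POSIX, not SPDK code):

/* Illustrative only: once a descriptor is closed, any further I/O on it
 * fails with errno 9 (EBADF) -- the "Bad file descriptor" seen when the
 * transport flushes a qpair whose socket is already gone. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                        /* tear the descriptor down first */

    char byte = 0;
    if (write(fds[1], &byte, 1) < 0) {    /* then try to use it anyway */
        /* Prints: "write failed, errno = 9 (Bad file descriptor)" */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}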
00:26:20.665 [2024-10-30 12:37:53.221523] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.221891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.221919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.221935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.222171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.222410] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.665 [2024-10-30 12:37:53.222431] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.665 [2024-10-30 12:37:53.222450] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.665 [2024-10-30 12:37:53.225440] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.665 [2024-10-30 12:37:53.234745] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.665 [2024-10-30 12:37:53.235174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.665 [2024-10-30 12:37:53.235201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.665 [2024-10-30 12:37:53.235217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.665 [2024-10-30 12:37:53.235462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.665 [2024-10-30 12:37:53.235675] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.235696] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.235708] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.238675] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.666 [2024-10-30 12:37:53.248087] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.248468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.248497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.248514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.248767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.248976] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.248997] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.249010] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.251976] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.666 [2024-10-30 12:37:53.261340] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.261714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.261742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.261758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.261981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.262192] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.262212] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.262224] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.265228] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.666 [2024-10-30 12:37:53.274582] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.274964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.274994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.275011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.275236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.275476] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.275497] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.275511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.278473] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.666 [2024-10-30 12:37:53.287893] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.288317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.288347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.288364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.288605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.288799] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.288821] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.288833] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.291789] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.666 [2024-10-30 12:37:53.301117] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.301533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.301561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.301576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.301794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.302004] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.302025] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.302038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.305077] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.666 [2024-10-30 12:37:53.314439] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.314794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.314867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.314883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.315093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.315340] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.315362] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.315377] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 5613.00 IOPS, 21.93 MiB/s [2024-10-30T11:37:53.347Z] [2024-10-30 12:37:53.319735] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
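The "5613.00 IOPS, 21.93 MiB/s [2024-10-30T11:37:53.347Z]" fragment interleaved in the record above is a periodic throughput sample from the performance tool running alongside the reset loop, printed from another thread into the same console stream. The two figures are consistent with a 4 KiB I/O size: 21.93 MiB/s is about 22,995,000 B/s, and 22,995,000 / 5613 IOPS is roughly 4096 bytes per I/O.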
00:26:20.666 [2024-10-30 12:37:53.327698] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.328178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.328230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.328247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.328522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.328728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.328749] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.328761] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.666 [2024-10-30 12:37:53.331769] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.666 [2024-10-30 12:37:53.340850] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.666 [2024-10-30 12:37:53.341293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.666 [2024-10-30 12:37:53.341331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.666 [2024-10-30 12:37:53.341348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.666 [2024-10-30 12:37:53.341589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.666 [2024-10-30 12:37:53.341795] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.666 [2024-10-30 12:37:53.341817] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.666 [2024-10-30 12:37:53.341845] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.667 [2024-10-30 12:37:53.344873] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.927 [2024-10-30 12:37:53.354182] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.927 [2024-10-30 12:37:53.354573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.927 [2024-10-30 12:37:53.354601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.927 [2024-10-30 12:37:53.354617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.927 [2024-10-30 12:37:53.354839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.927 [2024-10-30 12:37:53.355043] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.927 [2024-10-30 12:37:53.355065] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.927 [2024-10-30 12:37:53.355078] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.927 [2024-10-30 12:37:53.358003] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.927 [2024-10-30 12:37:53.367379] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.927 [2024-10-30 12:37:53.367691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.927 [2024-10-30 12:37:53.367764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.927 [2024-10-30 12:37:53.367781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.927 [2024-10-30 12:37:53.368013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.927 [2024-10-30 12:37:53.368217] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.927 [2024-10-30 12:37:53.368252] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.927 [2024-10-30 12:37:53.368274] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.927 [2024-10-30 12:37:53.371048] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.927 [2024-10-30 12:37:53.380561] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.927 [2024-10-30 12:37:53.381012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.927 [2024-10-30 12:37:53.381040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.927 [2024-10-30 12:37:53.381056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.927 [2024-10-30 12:37:53.381312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.927 [2024-10-30 12:37:53.381505] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.927 [2024-10-30 12:37:53.381527] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.927 [2024-10-30 12:37:53.381541] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.927 [2024-10-30 12:37:53.384357] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.927 [2024-10-30 12:37:53.393692] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.927 [2024-10-30 12:37:53.394148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.927 [2024-10-30 12:37:53.394202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.927 [2024-10-30 12:37:53.394218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.927 [2024-10-30 12:37:53.394488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.927 [2024-10-30 12:37:53.394715] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.927 [2024-10-30 12:37:53.394740] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.927 [2024-10-30 12:37:53.394753] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.927 [2024-10-30 12:37:53.397626] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.927 [2024-10-30 12:37:53.406736] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.407080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.407108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.407124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.407364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.407594] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.407614] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.407627] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.410505] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.928 [2024-10-30 12:37:53.419819] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.420166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.420194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.420211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.420472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.420697] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.420718] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.420730] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.423601] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
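The NOTICE/ERROR pairs in every cycle trace SPDK's asynchronous controller-reset flow: the driver disconnects the controller, then polls reconnection until it either completes or, as here, fails because the transport cannot connect, landing in the "controller reinitialization failed" / "Resetting controller failed" path. A hedged sketch of that flow using the public API (spdk_nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_async, and spdk_nvme_ctrlr_reconnect_poll_async are real SPDK calls; the spin-wait policy below is a simplification — bdev_nvme drives the poll from its own poller with retry and backoff logic):

/* Hedged sketch of the reconnect flow behind the messages above.
 * Assumes a valid ctrlr handle obtained elsewhere; return-value details
 * beyond 0 / -EAGAIN are per the SPDK docs for the revision in use. */
#include <errno.h>
#include "spdk/nvme.h"

static int
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    int rc = spdk_nvme_ctrlr_disconnect(ctrlr);   /* "resetting controller" */
    if (rc != 0) {
        return rc;
    }

    spdk_nvme_ctrlr_reconnect_async(ctrlr);

    /* Poll until the reconnect finishes: 0 means reconnected, -EAGAIN means
     * still in progress; any other negative value corresponds to the
     * "controller reinitialization failed" path in the log. */
    do {
        rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
    } while (rc == -EAGAIN);

    return rc;
}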
00:26:20.928 [2024-10-30 12:37:53.432928] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.433344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.433374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.433391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.433642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.433847] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.433867] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.433880] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.436717] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.928 [2024-10-30 12:37:53.446183] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.446528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.446558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.446575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.446800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.447008] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.447030] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.447042] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.450043] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.928 [2024-10-30 12:37:53.459372] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.459778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.459807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.459823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.460059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.460290] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.460312] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.460324] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.463209] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.928 [2024-10-30 12:37:53.472390] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.472765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.472793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.472809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.473024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.473228] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.473247] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.473285] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.476178] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.928 [2024-10-30 12:37:53.485471] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.485881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.485919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.485936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.486170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.486420] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.486442] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.486455] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.489431] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.928 [2024-10-30 12:37:53.498865] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.499176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.499205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.499221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.499468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.928 [2024-10-30 12:37:53.499696] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.928 [2024-10-30 12:37:53.499717] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.928 [2024-10-30 12:37:53.499731] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.928 [2024-10-30 12:37:53.502675] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.928 [2024-10-30 12:37:53.512010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.928 [2024-10-30 12:37:53.512415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.928 [2024-10-30 12:37:53.512444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.928 [2024-10-30 12:37:53.512461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.928 [2024-10-30 12:37:53.512698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.512903] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.512924] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.512937] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.515811] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.929 [2024-10-30 12:37:53.525170] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.525587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.525616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.525631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.525866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.526070] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.526089] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.526103] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.529019] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.929 [2024-10-30 12:37:53.538219] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.538581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.538624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.538640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.538856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.539059] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.539090] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.539103] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.541978] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.929 [2024-10-30 12:37:53.551308] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.551712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.551739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.551755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.551990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.552194] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.552215] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.552228] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.555104] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.929 [2024-10-30 12:37:53.564314] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.564660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.564688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.564705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.564938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.565150] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.565177] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.565191] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.568106] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.929 [2024-10-30 12:37:53.577274] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.577616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.577645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.577661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.577878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.578082] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.578103] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.578116] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.581065] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.929 [2024-10-30 12:37:53.590226] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.590575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.590604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.590621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.590859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.591064] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.591085] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.591098] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.594027] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.929 [2024-10-30 12:37:53.603396] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.929 [2024-10-30 12:37:53.603803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.929 [2024-10-30 12:37:53.603832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:20.929 [2024-10-30 12:37:53.603848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:20.929 [2024-10-30 12:37:53.604082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:20.929 [2024-10-30 12:37:53.604330] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.929 [2024-10-30 12:37:53.604352] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.929 [2024-10-30 12:37:53.604366] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.929 [2024-10-30 12:37:53.607422] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.189 [2024-10-30 12:37:53.616609] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.189 [2024-10-30 12:37:53.616954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.189 [2024-10-30 12:37:53.616996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.189 [2024-10-30 12:37:53.617012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.189 [2024-10-30 12:37:53.617231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.189 [2024-10-30 12:37:53.617463] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.189 [2024-10-30 12:37:53.617485] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.189 [2024-10-30 12:37:53.617499] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.189 [2024-10-30 12:37:53.620371] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.189 [2024-10-30 12:37:53.629654] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.189 [2024-10-30 12:37:53.629958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.189 [2024-10-30 12:37:53.630001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.189 [2024-10-30 12:37:53.630018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.189 [2024-10-30 12:37:53.630235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.189 [2024-10-30 12:37:53.630469] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.189 [2024-10-30 12:37:53.630491] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.630503] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.633374] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.190 [2024-10-30 12:37:53.642732] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.643140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.643170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.643186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.643431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.643642] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.643661] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.643674] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.646533] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.190 [2024-10-30 12:37:53.655836] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.656207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.656238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.656264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.656521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.656727] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.656747] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.656759] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.659515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.190 [2024-10-30 12:37:53.668811] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.669184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.669212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.669228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.669475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.669695] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.669716] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.669729] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.672600] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.190 [2024-10-30 12:37:53.681921] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.682312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.682341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.682357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.682604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.682792] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.682811] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.682824] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.685737] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.190 [2024-10-30 12:37:53.695056] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.695408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.695437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.695454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.695695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.695899] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.695919] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.695932] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.698847] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.190 [2024-10-30 12:37:53.708213] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.708586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.708615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.708631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.708865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.709068] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.709089] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.709101] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.712017] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.190 [2024-10-30 12:37:53.721197] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.721607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.721635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.721651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.721881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.722085] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.722106] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.722118] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.724995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.190 [2024-10-30 12:37:53.734310] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.734724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.734753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.734769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.735005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.735208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.735234] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.735247] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.738198] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.190 [2024-10-30 12:37:53.747521] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.747890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.747919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.747936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.748184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.748410] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.748433] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.748446] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.751473] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.190 [2024-10-30 12:37:53.760592] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.760964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.760992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.190 [2024-10-30 12:37:53.761008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.190 [2024-10-30 12:37:53.761227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.190 [2024-10-30 12:37:53.761459] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.190 [2024-10-30 12:37:53.761481] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.190 [2024-10-30 12:37:53.761494] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.190 [2024-10-30 12:37:53.764345] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.190 [2024-10-30 12:37:53.773755] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.190 [2024-10-30 12:37:53.774101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.190 [2024-10-30 12:37:53.774131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.774148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.774384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.774614] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.774634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.774647] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.777512] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.191 [2024-10-30 12:37:53.786840] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.787149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.787178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.787194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.787472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.787697] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.787717] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.787729] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.790525] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.191 [2024-10-30 12:37:53.799887] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.800227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.800265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.800300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.800542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.800762] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.800783] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.800796] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.803673] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.191 [2024-10-30 12:37:53.813003] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.813347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.813376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.813393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.813628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.813832] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.813852] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.813865] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.816673] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.191 [2024-10-30 12:37:53.826120] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.826532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.826566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.826582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.826818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.827006] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.827026] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.827038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.829995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.191 [2024-10-30 12:37:53.839302] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.839709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.839739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.839755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.839990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.840194] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.840215] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.840228] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.843141] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.191 [2024-10-30 12:37:53.852312] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.852620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.852649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.852665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.852882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.853087] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.853108] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.853120] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.856020] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.191 [2024-10-30 12:37:53.865570] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.191 [2024-10-30 12:37:53.865993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.191 [2024-10-30 12:37:53.866021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.191 [2024-10-30 12:37:53.866036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.191 [2024-10-30 12:37:53.866284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.191 [2024-10-30 12:37:53.866491] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.191 [2024-10-30 12:37:53.866518] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.191 [2024-10-30 12:37:53.866532] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.191 [2024-10-30 12:37:53.869563] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.452 [2024-10-30 12:37:53.878983] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.879331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.879359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.879377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.879612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.879807] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.879827] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.879840] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.882852] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.452 [2024-10-30 12:37:53.892117] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.892496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.892524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.892540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.892756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.892962] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.892982] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.892994] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.896240] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.452 [2024-10-30 12:37:53.905287] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.905713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.905741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.905756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.905992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.906197] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.906221] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.906234] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.909151] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.452 [2024-10-30 12:37:53.918409] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.918741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.918768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.918784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.919008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.919214] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.919243] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.919264] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.922133] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.452 [2024-10-30 12:37:53.931664] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.932017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.932058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.932074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.932320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.932531] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.932552] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.932566] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.935442] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.452 [2024-10-30 12:37:53.944750] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.945092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.945120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.945137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.945384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.945595] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.945615] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.945627] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.948593] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.452 [2024-10-30 12:37:53.957914] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.958265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.958294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.958310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.958546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.958750] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.958770] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.958782] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.961695] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.452 [2024-10-30 12:37:53.971118] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.971495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.452 [2024-10-30 12:37:53.971523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.452 [2024-10-30 12:37:53.971540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.452 [2024-10-30 12:37:53.971791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.452 [2024-10-30 12:37:53.971996] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.452 [2024-10-30 12:37:53.972016] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.452 [2024-10-30 12:37:53.972028] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.452 [2024-10-30 12:37:53.974942] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.452 [2024-10-30 12:37:53.984737] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.452 [2024-10-30 12:37:53.985159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:53.985208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:53.985225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:53.985451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:53.985705] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:53.985739] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:53.985752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:53.988824] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.453 [2024-10-30 12:37:53.998398] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:53.998786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:53.998820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:53.998837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:53.999067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:53.999323] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:53.999347] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:53.999361] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.002586] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.453 [2024-10-30 12:37:54.011694] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.012065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.012094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.012110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.012346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.012579] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.012600] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.012614] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.015835] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.453 [2024-10-30 12:37:54.024925] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.025339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.025369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.025386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.025637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.025841] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.025862] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.025874] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.028903] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.453 [2024-10-30 12:37:54.038289] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.038719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.038769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.038794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.039056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.039273] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.039296] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.039325] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.042337] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.453 [2024-10-30 12:37:54.051515] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.051837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.051866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.051883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.052100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.052338] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.052360] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.052374] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.055169] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.453 [2024-10-30 12:37:54.064612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.065019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.065048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.065064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.065312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.065532] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.065569] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.065582] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.068382] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.453 [2024-10-30 12:37:54.077630] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.077976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.078005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.078022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.078269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.078464] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.078489] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.078503] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.081412] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.453 [2024-10-30 12:37:54.090712] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.091024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.091052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.091068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.091304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.091513] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.091535] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.091548] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.094426] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.453 [2024-10-30 12:37:54.103766] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.104110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.104139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.104155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.104424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.104620] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.453 [2024-10-30 12:37:54.104641] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.453 [2024-10-30 12:37:54.104654] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.453 [2024-10-30 12:37:54.107530] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.453 [2024-10-30 12:37:54.116879] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.453 [2024-10-30 12:37:54.117227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.453 [2024-10-30 12:37:54.117266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.453 [2024-10-30 12:37:54.117285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.453 [2024-10-30 12:37:54.117524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.453 [2024-10-30 12:37:54.117728] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.454 [2024-10-30 12:37:54.117748] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.454 [2024-10-30 12:37:54.117761] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.454 [2024-10-30 12:37:54.120525] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.454 [2024-10-30 12:37:54.130084] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.454 [2024-10-30 12:37:54.130498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.454 [2024-10-30 12:37:54.130529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.454 [2024-10-30 12:37:54.130546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.454 [2024-10-30 12:37:54.130798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.454 [2024-10-30 12:37:54.130987] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.454 [2024-10-30 12:37:54.131008] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.454 [2024-10-30 12:37:54.131021] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.454 [2024-10-30 12:37:54.134087] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.713 [2024-10-30 12:37:54.143299] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.143617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.143644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.143660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.143855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.144077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.144096] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.144109] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.146947] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.713 [2024-10-30 12:37:54.156408] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.156792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.156835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.156850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.157066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.157301] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.157323] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.157337] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.160186] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.713 [2024-10-30 12:37:54.169637] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.170044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.170077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.170094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.170343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.170567] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.170588] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.170601] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.173479] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.713 [2024-10-30 12:37:54.182812] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.183167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.183217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.183235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.183513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.183719] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.183740] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.183752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.186625] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.713 [2024-10-30 12:37:54.195856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.196208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.196306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.196324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.196572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.196775] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.196796] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.196808] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.199568] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.713 [2024-10-30 12:37:54.208936] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.209293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.209338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.209355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.209593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.209797] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.209817] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.209830] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.212749] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.713 [2024-10-30 12:37:54.221915] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.222287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.222314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.222330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.222547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.222751] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.222770] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.222783] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.225660] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.713 [2024-10-30 12:37:54.235058] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.713 [2024-10-30 12:37:54.235492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.713 [2024-10-30 12:37:54.235544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:21.713 [2024-10-30 12:37:54.235561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:21.713 [2024-10-30 12:37:54.235816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:21.713 [2024-10-30 12:37:54.236004] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.713 [2024-10-30 12:37:54.236024] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.713 [2024-10-30 12:37:54.236037] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.713 [2024-10-30 12:37:54.238981] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.713 [2024-10-30 12:37:54.248410] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.713 [2024-10-30 12:37:54.248782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.713 [2024-10-30 12:37:54.248811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.713 [2024-10-30 12:37:54.248829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.713 [2024-10-30 12:37:54.249068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.713 [2024-10-30 12:37:54.249303] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.713 [2024-10-30 12:37:54.249330] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.713 [2024-10-30 12:37:54.249344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.713 [2024-10-30 12:37:54.252322] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.713 [2024-10-30 12:37:54.261526] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.713 [2024-10-30 12:37:54.261943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.713 [2024-10-30 12:37:54.261971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.713 [2024-10-30 12:37:54.261988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.713 [2024-10-30 12:37:54.262224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.713 [2024-10-30 12:37:54.262459] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.713 [2024-10-30 12:37:54.262480] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.713 [2024-10-30 12:37:54.262492] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.713 [2024-10-30 12:37:54.265362] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.274667] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.275015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.275044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.275060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.275307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.275508] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.275530] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.275561] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.278436] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.287774] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.288118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.288146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.288162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.288429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.288656] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.288677] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.288691] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.291554] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.300920] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.301266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.301294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.301311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.301533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.301739] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.301760] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.301772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.304609] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.314277] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.314654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.314685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.314702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.314946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.315162] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.315187] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.315201] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.318319] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 4490.40 IOPS, 17.54 MiB/s [2024-10-30T11:37:54.395Z] [2024-10-30 12:37:54.327533] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.327913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.327941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.327957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.328194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.328431] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.328453] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.328468] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.331512] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.340764] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.341114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.341142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.341159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.341413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.341659] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.341679] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.341691] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.344661] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.353993] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.354403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.354433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.354449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.354689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.354878] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.354898] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.354910] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.357816] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.367088] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.367431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.367460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.367477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.367705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.367909] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.367940] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.367952] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.370836] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.380306] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.380687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.380715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.380731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.380954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.381160] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.381180] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.381192] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.714 [2024-10-30 12:37:54.384073] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.714 [2024-10-30 12:37:54.393721] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.714 [2024-10-30 12:37:54.394084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.714 [2024-10-30 12:37:54.394126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.714 [2024-10-30 12:37:54.394143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.714 [2024-10-30 12:37:54.394412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.714 [2024-10-30 12:37:54.394652] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.714 [2024-10-30 12:37:54.394672] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.714 [2024-10-30 12:37:54.394685] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.397747] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.973 [2024-10-30 12:37:54.406917] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.973 [2024-10-30 12:37:54.407389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.973 [2024-10-30 12:37:54.407418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.973 [2024-10-30 12:37:54.407435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.973 [2024-10-30 12:37:54.407666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.973 [2024-10-30 12:37:54.407871] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.973 [2024-10-30 12:37:54.407891] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.973 [2024-10-30 12:37:54.407904] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.410800] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.973 [2024-10-30 12:37:54.420095] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.973 [2024-10-30 12:37:54.420475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.973 [2024-10-30 12:37:54.420504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.973 [2024-10-30 12:37:54.420521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.973 [2024-10-30 12:37:54.420770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.973 [2024-10-30 12:37:54.420974] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.973 [2024-10-30 12:37:54.421002] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.973 [2024-10-30 12:37:54.421016] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.423903] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.973 [2024-10-30 12:37:54.433300] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.973 [2024-10-30 12:37:54.433660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.973 [2024-10-30 12:37:54.433688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.973 [2024-10-30 12:37:54.433704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.973 [2024-10-30 12:37:54.433939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.973 [2024-10-30 12:37:54.434143] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.973 [2024-10-30 12:37:54.434162] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.973 [2024-10-30 12:37:54.434175] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.437055] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.973 [2024-10-30 12:37:54.446449] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.973 [2024-10-30 12:37:54.446858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.973 [2024-10-30 12:37:54.446897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.973 [2024-10-30 12:37:54.446913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.973 [2024-10-30 12:37:54.447135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.973 [2024-10-30 12:37:54.447379] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.973 [2024-10-30 12:37:54.447400] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.973 [2024-10-30 12:37:54.447414] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.450365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.973 [2024-10-30 12:37:54.459790] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.973 [2024-10-30 12:37:54.460250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.973 [2024-10-30 12:37:54.460323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.973 [2024-10-30 12:37:54.460340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.973 [2024-10-30 12:37:54.460585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.973 [2024-10-30 12:37:54.460775] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.973 [2024-10-30 12:37:54.460794] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.973 [2024-10-30 12:37:54.460807] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.463686] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.973 [2024-10-30 12:37:54.473002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.973 [2024-10-30 12:37:54.473349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.973 [2024-10-30 12:37:54.473377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.973 [2024-10-30 12:37:54.473393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.973 [2024-10-30 12:37:54.473623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.973 [2024-10-30 12:37:54.473812] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.973 [2024-10-30 12:37:54.473831] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.973 [2024-10-30 12:37:54.473844] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.973 [2024-10-30 12:37:54.476759] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.486288] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.486641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.486672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.486689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.486924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.487129] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.487149] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.487161] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.490035] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.499710] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.500081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.500119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.500135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.500397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.500626] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.500645] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.500658] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.503685] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.512953] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.513335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.513365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.513381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.513603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.513809] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.513829] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.513842] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.516785] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.526131] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.526460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.526503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.526520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.526752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.526956] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.526975] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.526988] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.529999] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.539298] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.539642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.539670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.539686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.539922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.540125] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.540153] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.540165] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.542921] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.552330] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.552743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.552770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.552796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.553036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.553241] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.553270] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.553290] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.556036] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.565320] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.565680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.565708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.565724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.565967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.566173] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.566192] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.566205] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.569115] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.974 [2024-10-30 12:37:54.578407] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.974 [2024-10-30 12:37:54.578834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.974 [2024-10-30 12:37:54.578863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.974 [2024-10-30 12:37:54.578878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.974 [2024-10-30 12:37:54.579112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.974 [2024-10-30 12:37:54.579344] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.974 [2024-10-30 12:37:54.579365] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.974 [2024-10-30 12:37:54.579378] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.974 [2024-10-30 12:37:54.582223] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.975 [2024-10-30 12:37:54.591516] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.975 [2024-10-30 12:37:54.591920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.975 [2024-10-30 12:37:54.591948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.975 [2024-10-30 12:37:54.591963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.975 [2024-10-30 12:37:54.592202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.975 [2024-10-30 12:37:54.592434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.975 [2024-10-30 12:37:54.592460] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.975 [2024-10-30 12:37:54.592474] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.975 [2024-10-30 12:37:54.595399] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.975 [2024-10-30 12:37:54.604612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.975 [2024-10-30 12:37:54.604955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.975 [2024-10-30 12:37:54.604983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.975 [2024-10-30 12:37:54.605000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.975 [2024-10-30 12:37:54.605234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.975 [2024-10-30 12:37:54.605437] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.975 [2024-10-30 12:37:54.605458] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.975 [2024-10-30 12:37:54.605471] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.975 [2024-10-30 12:37:54.608378] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.975 [2024-10-30 12:37:54.617834] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.975 [2024-10-30 12:37:54.618237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.975 [2024-10-30 12:37:54.618288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.975 [2024-10-30 12:37:54.618315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.975 [2024-10-30 12:37:54.618569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.975 [2024-10-30 12:37:54.618758] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.975 [2024-10-30 12:37:54.618777] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.975 [2024-10-30 12:37:54.618789] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.975 [2024-10-30 12:37:54.621662] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.975 [2024-10-30 12:37:54.631009] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.975 [2024-10-30 12:37:54.631316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.975 [2024-10-30 12:37:54.631357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.975 [2024-10-30 12:37:54.631373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.975 [2024-10-30 12:37:54.631592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.975 [2024-10-30 12:37:54.631797] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.975 [2024-10-30 12:37:54.631817] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.975 [2024-10-30 12:37:54.631829] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.975 [2024-10-30 12:37:54.634712] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:21.975 [2024-10-30 12:37:54.644192] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:21.975 [2024-10-30 12:37:54.644606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.975 [2024-10-30 12:37:54.644635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:21.975 [2024-10-30 12:37:54.644674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:21.975 [2024-10-30 12:37:54.644903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:21.975 [2024-10-30 12:37:54.645092] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:21.975 [2024-10-30 12:37:54.645112] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:21.975 [2024-10-30 12:37:54.645124] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:21.975 [2024-10-30 12:37:54.647966] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.233 [2024-10-30 12:37:54.657617] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.233 [2024-10-30 12:37:54.658007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.233 [2024-10-30 12:37:54.658061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.233 [2024-10-30 12:37:54.658086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.233 [2024-10-30 12:37:54.658341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.233 [2024-10-30 12:37:54.658536] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.233 [2024-10-30 12:37:54.658555] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.233 [2024-10-30 12:37:54.658582] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.233 [2024-10-30 12:37:54.661570] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.233 [2024-10-30 12:37:54.670764] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.233 [2024-10-30 12:37:54.671116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.233 [2024-10-30 12:37:54.671143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.233 [2024-10-30 12:37:54.671159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.233 [2024-10-30 12:37:54.671417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.233 [2024-10-30 12:37:54.671627] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.233 [2024-10-30 12:37:54.671647] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.233 [2024-10-30 12:37:54.671659] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.233 [2024-10-30 12:37:54.674515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.233 [2024-10-30 12:37:54.683765] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.233 [2024-10-30 12:37:54.684163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.233 [2024-10-30 12:37:54.684218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.233 [2024-10-30 12:37:54.684239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.233 [2024-10-30 12:37:54.684512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.233 [2024-10-30 12:37:54.684738] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.233 [2024-10-30 12:37:54.684758] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.233 [2024-10-30 12:37:54.684771] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.233 [2024-10-30 12:37:54.687661] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.233 [2024-10-30 12:37:54.696976] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.233 [2024-10-30 12:37:54.697321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.233 [2024-10-30 12:37:54.697351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.233 [2024-10-30 12:37:54.697368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.233 [2024-10-30 12:37:54.697608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.233 [2024-10-30 12:37:54.697813] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.233 [2024-10-30 12:37:54.697832] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.233 [2024-10-30 12:37:54.697845] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.233 [2024-10-30 12:37:54.700719] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.233 [2024-10-30 12:37:54.710131] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.233 [2024-10-30 12:37:54.710529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.233 [2024-10-30 12:37:54.710558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.233 [2024-10-30 12:37:54.710596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.233 [2024-10-30 12:37:54.710831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.233 [2024-10-30 12:37:54.711035] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.233 [2024-10-30 12:37:54.711055] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.233 [2024-10-30 12:37:54.711067] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.713966] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.723248] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.234 [2024-10-30 12:37:54.723595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.234 [2024-10-30 12:37:54.723622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.234 [2024-10-30 12:37:54.723638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.234 [2024-10-30 12:37:54.723878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.234 [2024-10-30 12:37:54.724083] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.234 [2024-10-30 12:37:54.724102] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.234 [2024-10-30 12:37:54.724115] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.726911] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.736356] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.234 [2024-10-30 12:37:54.736761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.234 [2024-10-30 12:37:54.736789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.234 [2024-10-30 12:37:54.736805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.234 [2024-10-30 12:37:54.737041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.234 [2024-10-30 12:37:54.737245] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.234 [2024-10-30 12:37:54.737284] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.234 [2024-10-30 12:37:54.737297] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.740046] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.749498] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.234 [2024-10-30 12:37:54.749825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.234 [2024-10-30 12:37:54.749853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.234 [2024-10-30 12:37:54.749870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.234 [2024-10-30 12:37:54.750096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.234 [2024-10-30 12:37:54.750328] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.234 [2024-10-30 12:37:54.750348] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.234 [2024-10-30 12:37:54.750361] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.753369] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.762690] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.234 [2024-10-30 12:37:54.763095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.234 [2024-10-30 12:37:54.763123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.234 [2024-10-30 12:37:54.763139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.234 [2024-10-30 12:37:54.763405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.234 [2024-10-30 12:37:54.763617] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.234 [2024-10-30 12:37:54.763641] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.234 [2024-10-30 12:37:54.763654] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.766519] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.775856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.234 [2024-10-30 12:37:54.776234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.234 [2024-10-30 12:37:54.776277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.234 [2024-10-30 12:37:54.776310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.234 [2024-10-30 12:37:54.776550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.234 [2024-10-30 12:37:54.776756] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.234 [2024-10-30 12:37:54.776775] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.234 [2024-10-30 12:37:54.776788] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.779659] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.788890] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.234 [2024-10-30 12:37:54.789233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.234 [2024-10-30 12:37:54.789268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.234 [2024-10-30 12:37:54.789286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.234 [2024-10-30 12:37:54.789521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.234 [2024-10-30 12:37:54.789725] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.234 [2024-10-30 12:37:54.789744] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.234 [2024-10-30 12:37:54.789756] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.234 [2024-10-30 12:37:54.792511] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.234 [2024-10-30 12:37:54.802026] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.234 [2024-10-30 12:37:54.802440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.234 [2024-10-30 12:37:54.802469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.234 [2024-10-30 12:37:54.802487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.234 [2024-10-30 12:37:54.802726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.234 [2024-10-30 12:37:54.802928] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.234 [2024-10-30 12:37:54.802946] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.234 [2024-10-30 12:37:54.802958] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.234 [2024-10-30 12:37:54.805876] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.234 [2024-10-30 12:37:54.815225] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.234 [2024-10-30 12:37:54.815570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.234 [2024-10-30 12:37:54.815598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.234 [2024-10-30 12:37:54.815614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.234 [2024-10-30 12:37:54.815831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.234 [2024-10-30 12:37:54.816036] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.234 [2024-10-30 12:37:54.816056] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.234 [2024-10-30 12:37:54.816068] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.234 [2024-10-30 12:37:54.818953] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.234 [2024-10-30 12:37:54.828406] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.234 [2024-10-30 12:37:54.828832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.234 [2024-10-30 12:37:54.828861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.234 [2024-10-30 12:37:54.828877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.234 [2024-10-30 12:37:54.829129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.234 [2024-10-30 12:37:54.829346] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.234 [2024-10-30 12:37:54.829367] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.234 [2024-10-30 12:37:54.829379] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.234 [2024-10-30 12:37:54.832265] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.234 [2024-10-30 12:37:54.841512] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.234 [2024-10-30 12:37:54.841857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.234 [2024-10-30 12:37:54.841884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.234 [2024-10-30 12:37:54.841900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.234 [2024-10-30 12:37:54.842132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.234 [2024-10-30 12:37:54.842382] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.234 [2024-10-30 12:37:54.842404] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.234 [2024-10-30 12:37:54.842417] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.234 [2024-10-30 12:37:54.845301] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.234 [2024-10-30 12:37:54.854587] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.234 [2024-10-30 12:37:54.854933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.234 [2024-10-30 12:37:54.854960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.235 [2024-10-30 12:37:54.854977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.235 [2024-10-30 12:37:54.855193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.235 [2024-10-30 12:37:54.855425] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.235 [2024-10-30 12:37:54.855446] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.235 [2024-10-30 12:37:54.855458] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.235 [2024-10-30 12:37:54.858268] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.235 [2024-10-30 12:37:54.867683] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.235 [2024-10-30 12:37:54.868101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.235 [2024-10-30 12:37:54.868129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.235 [2024-10-30 12:37:54.868147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.235 [2024-10-30 12:37:54.868391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.235 [2024-10-30 12:37:54.868615] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.235 [2024-10-30 12:37:54.868634] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.235 [2024-10-30 12:37:54.868647] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.235 [2024-10-30 12:37:54.871501] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.235 [2024-10-30 12:37:54.880787] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.235 [2024-10-30 12:37:54.881244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.235 [2024-10-30 12:37:54.881280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.235 [2024-10-30 12:37:54.881312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.235 [2024-10-30 12:37:54.881556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.235 [2024-10-30 12:37:54.881761] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.235 [2024-10-30 12:37:54.881781] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.235 [2024-10-30 12:37:54.881793] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.235 [2024-10-30 12:37:54.884639] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.235 [2024-10-30 12:37:54.893956] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.235 [2024-10-30 12:37:54.894288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.235 [2024-10-30 12:37:54.894317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.235 [2024-10-30 12:37:54.894338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.235 [2024-10-30 12:37:54.894584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.235 [2024-10-30 12:37:54.894774] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.235 [2024-10-30 12:37:54.894793] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.235 [2024-10-30 12:37:54.894806] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.235 [2024-10-30 12:37:54.897608] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.235 [2024-10-30 12:37:54.907037] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.235 [2024-10-30 12:37:54.907456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.235 [2024-10-30 12:37:54.907485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.235 [2024-10-30 12:37:54.907502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.235 [2024-10-30 12:37:54.907743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.235 [2024-10-30 12:37:54.907947] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.235 [2024-10-30 12:37:54.907966] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.235 [2024-10-30 12:37:54.907979] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.235 [2024-10-30 12:37:54.910869] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.493 [2024-10-30 12:37:54.920209] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.493 [2024-10-30 12:37:54.920531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.493 [2024-10-30 12:37:54.920586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.493 [2024-10-30 12:37:54.920617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.493 [2024-10-30 12:37:54.920853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:54.921083] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:54.921114] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:54.921126] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:54.924055] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.494 [2024-10-30 12:37:54.933331] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:54.933686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:54.933736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:54.933752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:54.933967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:54.934188] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:54.934211] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:54.934224] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:54.937098] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.494 [2024-10-30 12:37:54.946565] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:54.946976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:54.947004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:54.947030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:54.947270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:54.947474] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:54.947494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:54.947506] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:54.950279] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.494 [2024-10-30 12:37:54.959561] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.494 [2024-10-30 12:37:54.959951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.494 [2024-10-30 12:37:54.960005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420
00:26:22.494 [2024-10-30 12:37:54.960027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set
00:26:22.494 [2024-10-30 12:37:54.960286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor
00:26:22.494 [2024-10-30 12:37:54.960494] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.494 [2024-10-30 12:37:54.960514] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.494 [2024-10-30 12:37:54.960526] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 720685 Killed "${NVMF_APP[@]}" "$@"
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:22.494 [2024-10-30 12:37:54.963587] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=721635
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 721635
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 721635 ']'
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
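Here the log turns from retry spam to the interesting transition: bdevperf.sh has killed the previous target (pid 720685), and tgt_init/nvmfappstart respawn nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 721635, then wait for its RPC socket. A minimal sketch of what that waitforlisten step amounts to, assuming the real helper in common/autotest_common.sh (not shown in this log) polls roughly like this:

    # Hypothetical stand-in for the traced waitforlisten call: succeed once the
    # target pid is alive and /var/tmp/spdk.sock exists, give up after
    # max_retries polls (the trace shows max_retries=100).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up; rpc_cmd can proceed
            sleep 0.1
        done
        return 1
    }
    waitforlisten_sketch 721635 /var/tmp/spdk.sock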
00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:22.494 12:37:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.494 [2024-10-30 12:37:54.972954] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:54.973352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:54.973385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:54.973403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:54.973630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:54.973847] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:54.973867] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:54.973880] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:54.977004] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.494 [2024-10-30 12:37:54.986360] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:54.986770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:54.986808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:54.986824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:54.987047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:54.987316] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:54.987338] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:54.987352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:54.990454] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.494 [2024-10-30 12:37:54.999940] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:55.000295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:55.000325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:55.000343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:55.000573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:55.000812] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:55.000833] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:55.000862] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:55.004171] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.494 [2024-10-30 12:37:55.013635] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:55.014003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:55.014032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:55.014057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:55.014281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:55.014513] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:55.014562] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:55.014576] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:55.015164] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:26:22.494 [2024-10-30 12:37:55.015221] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.494 [2024-10-30 12:37:55.017705] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.494 [2024-10-30 12:37:55.027172] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:55.027522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:55.027551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:55.027570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:55.027802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:55.028035] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:55.028055] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:55.028069] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:55.031297] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.494 [2024-10-30 12:37:55.040788] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:55.041225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:55.041266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:55.041286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:55.041502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:55.041726] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:55.041752] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:55.041765] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.494 [2024-10-30 12:37:55.044996] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.494 [2024-10-30 12:37:55.054375] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.494 [2024-10-30 12:37:55.054759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.494 [2024-10-30 12:37:55.054787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.494 [2024-10-30 12:37:55.054824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.494 [2024-10-30 12:37:55.055054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.494 [2024-10-30 12:37:55.055314] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.494 [2024-10-30 12:37:55.055338] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.494 [2024-10-30 12:37:55.055353] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.058661] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.495 [2024-10-30 12:37:55.068010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.068355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.068383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.068400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.068629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.068842] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.068863] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.068876] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.072147] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.495 [2024-10-30 12:37:55.081500] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.081864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.081893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.081909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.082124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.082389] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.082412] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.082426] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.085646] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.495 [2024-10-30 12:37:55.094376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:22.495 [2024-10-30 12:37:55.095143] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.095490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.095521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.095538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.095753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.095986] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.096008] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.096021] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.099217] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.495 [2024-10-30 12:37:55.108682] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.109157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.109190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.109210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.109444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.109708] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.109728] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.109743] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.112958] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.495 [2024-10-30 12:37:55.122368] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.122717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.122759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.122776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.122991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.123222] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.123266] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.123294] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.126621] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.495 [2024-10-30 12:37:55.135911] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.136317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.136347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.136363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.136594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.136817] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.136839] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.136852] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.139987] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.495 [2024-10-30 12:37:55.149341] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.149746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.149774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.149790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.150033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.150232] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.150289] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.150304] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.153456] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.495 [2024-10-30 12:37:55.156563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.495 [2024-10-30 12:37:55.156607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.495 [2024-10-30 12:37:55.156621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.495 [2024-10-30 12:37:55.156632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.495 [2024-10-30 12:37:55.156641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
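The app_setup_trace notices above spell out how to pull the trace the freshly started target is recording (the tracepoint group mask 0xFFFF comes from the -e 0xFFFF flag it was launched with). The commands are the ones the notices themselves quote; -i 0 matches the -i 0 instance id passed to nvmf_tgt earlier:

    spdk_trace -s nvmf -i 0           # snapshot the events of instance 0 at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shm file for offline analysis/debug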
00:26:22.495 [2024-10-30 12:37:55.158050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.495 [2024-10-30 12:37:55.158129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.495 [2024-10-30 12:37:55.158132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.495 [2024-10-30 12:37:55.162890] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.495 [2024-10-30 12:37:55.163328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.495 [2024-10-30 12:37:55.163360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.495 [2024-10-30 12:37:55.163379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.495 [2024-10-30 12:37:55.163615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.495 [2024-10-30 12:37:55.163830] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.495 [2024-10-30 12:37:55.163859] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.495 [2024-10-30 12:37:55.163876] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.495 [2024-10-30 12:37:55.167092] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.755 [2024-10-30 12:37:55.176725] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.755 [2024-10-30 12:37:55.177215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-10-30 12:37:55.177270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.755 [2024-10-30 12:37:55.177301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.755 [2024-10-30 12:37:55.177525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.755 [2024-10-30 12:37:55.177760] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.755 [2024-10-30 12:37:55.177782] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.755 [2024-10-30 12:37:55.177797] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.755 [2024-10-30 12:37:55.181046] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
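The reactor lines at the top of the previous entry follow directly from the -m 0xE core mask passed to nvmfappstart/nvmf_tgt: 0xE is binary 1110, so cores 1, 2 and 3 get reactors while core 0 is left unused, which is also why spdk_app_start reported "Total cores available: 3". A quick decode:

    # Why -m 0xE produced exactly the three reactors logged above.
    mask=0xE
    for core in 0 1 2 3; do
        if (( (mask >> core) & 1 )); then
            echo "core $core: reactor"
        else
            echo "core $core: unused"
        fi
    done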
00:26:22.755 [2024-10-30 12:37:55.190375] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.755 [2024-10-30 12:37:55.190865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.755 [2024-10-30 12:37:55.190903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.755 [2024-10-30 12:37:55.190923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.755 [2024-10-30 12:37:55.191164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.755 [2024-10-30 12:37:55.191389] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.755 [2024-10-30 12:37:55.191412] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.755 [2024-10-30 12:37:55.191428] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.755 [2024-10-30 12:37:55.194590] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.755 [2024-10-30 12:37:55.204021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.755 [2024-10-30 12:37:55.204529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.204569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.204589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.204826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.205043] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.205064] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.205079] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.208246] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.756 [2024-10-30 12:37:55.217683] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.218124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.218158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.218177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.218411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.218648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.218669] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.218684] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.221911] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.756 [2024-10-30 12:37:55.231297] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.231802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.231841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.231860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.232083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.232328] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.232350] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.232365] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.235553] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.756 [2024-10-30 12:37:55.244776] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.245108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.245138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.245154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.245381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.245614] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.245636] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.245649] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.248856] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.756 [2024-10-30 12:37:55.258312] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.258663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.258700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.258727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.258942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.259162] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.259184] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.259199] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.262479] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.756 [2024-10-30 12:37:55.271988] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.272318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.272347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.272365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.272580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.272800] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.272822] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.272835] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.756 [2024-10-30 12:37:55.276111] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.756 [2024-10-30 12:37:55.285469] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.285848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.285876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.285893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.286108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.286364] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.286387] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.286401] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.289673] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.756 [2024-10-30 12:37:55.299122] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.299475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.299505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.299521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.299736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.299964] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.299986] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.299999] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.756 [2024-10-30 12:37:55.303207] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.756 [2024-10-30 12:37:55.304994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.756 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.756 [2024-10-30 12:37:55.312668] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.313003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.313031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.756 [2024-10-30 12:37:55.313047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.756 [2024-10-30 12:37:55.313271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.756 [2024-10-30 12:37:55.313491] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.756 [2024-10-30 12:37:55.313512] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.756 [2024-10-30 12:37:55.313526] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.756 [2024-10-30 12:37:55.316826] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.756 3742.00 IOPS, 14.62 MiB/s [2024-10-30T11:37:55.437Z] [2024-10-30 12:37:55.327612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.756 [2024-10-30 12:37:55.328055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.756 [2024-10-30 12:37:55.328089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.757 [2024-10-30 12:37:55.328107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.757 [2024-10-30 12:37:55.328337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.757 [2024-10-30 12:37:55.328588] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.757 [2024-10-30 12:37:55.328624] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.757 [2024-10-30 12:37:55.328639] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.757 [2024-10-30 12:37:55.331825] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
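The bdevperf progress marker in the line above (3742.00 IOPS, 14.62 MiB/s) is internally consistent with a 4 KiB I/O size; that size is an inference from the two numbers, not a value printed in this stretch of the log:

    # 3742 IOPS x 4096 B ~= 14.62 MiB/s, matching the reported throughput.
    echo "scale=2; 3742 * 4096 / 1048576" | bc    # prints 14.61; bdevperf rounds the same value to 14.62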
00:26:22.757 [2024-10-30 12:37:55.341173] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.757 [2024-10-30 12:37:55.341618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.757 [2024-10-30 12:37:55.341653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.757 [2024-10-30 12:37:55.341671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.757 [2024-10-30 12:37:55.341908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.757 [2024-10-30 12:37:55.342122] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.757 [2024-10-30 12:37:55.342144] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.757 [2024-10-30 12:37:55.342159] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.757 [2024-10-30 12:37:55.345355] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.757 Malloc0 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.757 [2024-10-30 12:37:55.354882] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.757 [2024-10-30 12:37:55.355253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.757 [2024-10-30 12:37:55.355288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1339a40 with addr=10.0.0.2, port=4420 00:26:22.757 [2024-10-30 12:37:55.355304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1339a40 is same with the state(6) to be set 00:26:22.757 [2024-10-30 12:37:55.355521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339a40 (9): Bad file descriptor 00:26:22.757 [2024-10-30 12:37:55.355748] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.757 [2024-10-30 12:37:55.355769] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.757 [2024-10-30 12:37:55.355783] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
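Interleaved with the reset storm, host/bdevperf.sh is standing the target up over RPC. Condensed into one manual sequence (a sketch only: scripts/rpc.py against the default RPC socket, with the arguments exactly as they appear in the xtrace above):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB IO unit
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB RAM bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

The listener add (-t tcp -a 10.0.0.2 -s 4420) follows just below, and once it lands the host side finally logs "Resetting controller successful".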
00:26:22.757 [2024-10-30 12:37:55.359063] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.757 [2024-10-30 12:37:55.365424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.757 [2024-10-30 12:37:55.368578] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.757 12:37:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 720971 00:26:23.017 [2024-10-30 12:37:55.445426] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:26:24.888 4277.43 IOPS, 16.71 MiB/s [2024-10-30T11:37:58.526Z] 4817.62 IOPS, 18.82 MiB/s [2024-10-30T11:37:59.462Z] 5242.11 IOPS, 20.48 MiB/s [2024-10-30T11:38:00.398Z] 5579.30 IOPS, 21.79 MiB/s [2024-10-30T11:38:01.769Z] 5846.91 IOPS, 22.84 MiB/s [2024-10-30T11:38:02.702Z] 6067.42 IOPS, 23.70 MiB/s [2024-10-30T11:38:03.634Z] 6259.23 IOPS, 24.45 MiB/s [2024-10-30T11:38:04.566Z] 6419.71 IOPS, 25.08 MiB/s [2024-10-30T11:38:04.566Z] 6565.73 IOPS, 25.65 MiB/s 00:26:31.885 Latency(us) 00:26:31.885 [2024-10-30T11:38:04.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.885 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:31.885 Verification LBA range: start 0x0 length 0x4000 00:26:31.885 Nvme1n1 : 15.05 6549.00 25.58 10213.53 0.00 7594.09 603.78 47380.10 00:26:31.885 [2024-10-30T11:38:04.566Z] =================================================================================================================== 00:26:31.885 [2024-10-30T11:38:04.566Z] Total : 6549.00 25.58 10213.53 0.00 7594.09 603.78 47380.10 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:32.142 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.143 rmmod nvme_tcp 00:26:32.143 rmmod nvme_fabrics 00:26:32.143 rmmod nvme_keyring 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 721635 ']' 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 721635 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 721635 ']' 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 721635 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 721635 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 721635' 00:26:32.143 killing process with pid 721635 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 721635 00:26:32.143 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 721635 00:26:32.402 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.402 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.402 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.402 12:38:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.402 12:38:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.933 00:26:34.933 real 0m22.526s 00:26:34.933 user 0m59.518s 00:26:34.933 sys 0m4.465s 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@10 -- # set +x 00:26:34.933 ************************************ 00:26:34.933 END TEST nvmf_bdevperf 00:26:34.933 ************************************ 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.933 ************************************ 00:26:34.933 START TEST nvmf_target_disconnect 00:26:34.933 ************************************ 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:34.933 * Looking for test storage... 00:26:34.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:34.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.933 --rc genhtml_branch_coverage=1 00:26:34.933 --rc genhtml_function_coverage=1 00:26:34.933 --rc genhtml_legend=1 00:26:34.933 --rc geninfo_all_blocks=1 00:26:34.933 --rc geninfo_unexecuted_blocks=1 00:26:34.933 00:26:34.933 ' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:34.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.933 --rc genhtml_branch_coverage=1 00:26:34.933 --rc genhtml_function_coverage=1 00:26:34.933 --rc genhtml_legend=1 00:26:34.933 --rc geninfo_all_blocks=1 00:26:34.933 --rc geninfo_unexecuted_blocks=1 00:26:34.933 00:26:34.933 ' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:34.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.933 --rc genhtml_branch_coverage=1 00:26:34.933 --rc genhtml_function_coverage=1 00:26:34.933 --rc genhtml_legend=1 00:26:34.933 --rc geninfo_all_blocks=1 00:26:34.933 --rc geninfo_unexecuted_blocks=1 00:26:34.933 00:26:34.933 ' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:34.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.933 --rc genhtml_branch_coverage=1 00:26:34.933 --rc genhtml_function_coverage=1 00:26:34.933 --rc genhtml_legend=1 00:26:34.933 --rc geninfo_all_blocks=1 00:26:34.933 --rc geninfo_unexecuted_blocks=1 00:26:34.933 00:26:34.933 ' 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.933 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:34.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:34.934 12:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:36.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:36.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:36.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:36.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
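The scan above walks the cached PCI bus for NICs the nvmf tests know how to drive (Intel E810 at 0x1592/0x159b, X722 at 0x37d2, plus the listed Mellanox IDs), keeps the two E810 functions at 0000:0a:00.0/.1, and resolves each to its renamed net device, cvl_0_0 and cvl_0_1. The same lookup done by hand (a sketch, not taken from the test scripts):

    # List E810 functions (vendor 8086, device 159b) and their net devices.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done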
00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.836 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:37.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:26:37.095 00:26:37.095 --- 10.0.0.2 ping statistics --- 00:26:37.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.095 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:26:37.095 00:26:37.095 --- 10.0.0.1 ping statistics --- 00:26:37.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.095 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.095 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 ************************************ 00:26:37.096 START TEST nvmf_target_disconnect_tc1 00:26:37.096 ************************************ 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:37.096 12:38:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:37.096 [2024-10-30 12:38:09.697894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.096 [2024-10-30 12:38:09.697967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165cf40 with addr=10.0.0.2, port=4420 00:26:37.096 [2024-10-30 12:38:09.697999] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:37.096 [2024-10-30 12:38:09.698022] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:37.096 [2024-10-30 12:38:09.698036] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:37.096 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:37.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:37.096 Initializing NVMe Controllers 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:37.096 00:26:37.096 real 0m0.094s 00:26:37.096 user 0m0.036s 00:26:37.096 sys 0m0.058s 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 ************************************ 00:26:37.096 END TEST nvmf_target_disconnect_tc1 00:26:37.096 ************************************ 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 ************************************ 00:26:37.096 START TEST nvmf_target_disconnect_tc2 00:26:37.096 ************************************ 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=724799 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 724799 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 724799 ']' 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:37.096 12:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.353 [2024-10-30 12:38:09.816151] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:26:37.353 [2024-10-30 12:38:09.816241] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.353 [2024-10-30 12:38:09.895197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.353 [2024-10-30 12:38:09.955087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.353 [2024-10-30 12:38:09.955144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
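For tc2, nvmfappstart launches a fresh target inside the target namespace; the xtrace above reduces to the following (a sketch: binary path relative to the SPDK checkout, and -m 0xF0 pins the four reactors to cores 4-7, matching the "Reactor started on core" notices around this point):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!   # 724799 in this run; tc2 kills it later to force the disconnect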
00:26:37.353 [2024-10-30 12:38:09.955172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.353 [2024-10-30 12:38:09.955184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.353 [2024-10-30 12:38:09.955194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.353 [2024-10-30 12:38:09.956843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:37.353 [2024-10-30 12:38:09.956867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:37.353 [2024-10-30 12:38:09.956925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:37.353 [2024-10-30 12:38:09.956928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:37.610 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:37.610 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:37.610 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:37.610 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:37.610 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 Malloc0 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 [2024-10-30 12:38:10.151420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 12:38:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 [2024-10-30 12:38:10.179732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=724937 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:37.611 12:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.154 12:38:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 724799 00:26:40.154 12:38:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:40.154 Read completed with error (sct=0, sc=8) 00:26:40.154 starting I/O failed 00:26:40.154 Read completed with error (sct=0, sc=8) 00:26:40.154 starting I/O failed 00:26:40.154 Read completed with error (sct=0, sc=8) 00:26:40.154 starting I/O failed 00:26:40.154 Read completed with error (sct=0, sc=8) 00:26:40.154 starting I/O failed 00:26:40.154 Read completed with error (sct=0, sc=8) 00:26:40.154 starting I/O failed 00:26:40.154 Read completed with error (sct=0, sc=8) 00:26:40.154 starting I/O failed 00:26:40.154 Read completed with error 
(sct=0, sc=8)
00:26:40.154 starting I/O failed
00:26:40.154 Write completed with error (sct=0, sc=8)
00:26:40.154 starting I/O failed
[… the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for every I/O outstanding on the failing queue pairs; individual repetitions elided …]
00:26:40.154 [2024-10-30 12:38:12.206556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[… further aborted Read/Write completions elided …]
00:26:40.154 [2024-10-30 12:38:12.206919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[… further aborted Read/Write completions elided …]
00:26:40.155 [2024-10-30 12:38:12.207212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… further aborted Read/Write completions elided …]
00:26:40.155 [2024-10-30 12:38:12.207544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:40.155 [2024-10-30 12:38:12.207711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.155 [2024-10-30 12:38:12.207759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.155 qpair failed and we were unable to recover it.
[… the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats from 12:38:12.207916 through 12:38:12.208760 on tqpair=0x7fd8f8000b90; individual repetitions elided …]
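Reading the repeated pair against the NVMe spec: sct=0 is the generic command status type and sc=0x8 is "Command Aborted due to SQ Deletion". Once the completion-polling path hits the CQ transport error (-6, i.e. -ENXIO), the qpair is torn down and every outstanding Read/Write is failed back with that status, which is what floods the log above. A minimal sketch of decoding such a completion, assuming SPDK's public spec header (spdk/nvme_spec.h) and its SPDK_NVME_SC_ABORTED_SQ_DELETION constant; this is illustration only, not part of the test:

#include <stdio.h>

#include "spdk/nvme_spec.h"

/* Sketch: decode the (sct, sc) pair seen in the log. Assumes SPDK's
 * spec header, where SPDK_NVME_SCT_GENERIC == 0x0 and
 * SPDK_NVME_SC_ABORTED_SQ_DELETION == 0x08. */
static void decode_cpl(const struct spdk_nvme_cpl *cpl)
{
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Matches "completed with error (sct=0, sc=8)": the I/O was
		 * aborted because its submission queue was deleted, e.g. when
		 * the qpair is destroyed after a transport error. */
		printf("I/O aborted due to SQ deletion (sct=0, sc=8)\n");
	} else if (spdk_nvme_cpl_is_error(cpl)) {
		printf("I/O failed: sct=%u, sc=0x%x\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

int main(void)
{
	struct spdk_nvme_cpl cpl = {0};

	cpl.status.sct = SPDK_NVME_SCT_GENERIC;
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;
	decode_cpl(&cpl);
	return 0;
}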
00:26:40.155 [2024-10-30 12:38:12.208862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.155 [2024-10-30 12:38:12.208889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.155 qpair failed and we were unable to recover it.
[… the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats from 12:38:12.208981 through 12:38:12.219890 as the initiator keeps retrying, cycling across tqpair=0x7fd8f8000b90, 0x7fd904000b90, 0x7fd8fc000b90 and 0x1b64fa0, always with addr=10.0.0.2, port=4420; individual repetitions elided …]
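errno = 111 in the posix_sock_create lines is Linux's ECONNREFUSED: nothing on the target side is accepting connections on 10.0.0.2:4420 (4420 being the IANA-assigned NVMe/TCP port), so every reconnect attempt is refused at the TCP level before NVMe/TCP ever gets a usable socket. The failure mode is reproducible with nothing but POSIX sockets; a standalone sketch, independent of SPDK, with the address and port taken from the log:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Standalone sketch: attempt a TCP connection to the NVMe/TCP target
 * address from the log. When nothing is listening there, connect()
 * fails with errno 111 (ECONNREFUSED) on Linux, exactly the
 * "connect() failed, errno = 111" lines emitted by posix_sock_create. */
int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in addr = {0};
	addr.sin_family = AF_INET;
	addr.sin_port = htons(4420);	/* IANA port for NVMe/TCP */
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		printf("connect() failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	close(fd);
	return 0;
}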
00:26:40.158 [2024-10-30 12:38:12.220057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.158 [2024-10-30 12:38:12.220091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.158 qpair failed and we were unable to recover it.
[… the retry triple continues from 12:38:12.220208 through 12:38:12.230736 across the same four tqpairs; individual repetitions elided …]
00:26:40.160 [2024-10-30 12:38:12.230818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.160 [2024-10-30 12:38:12.230843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.160 qpair failed and we were unable to recover it.
00:26:40.160 [2024-10-30 12:38:12.230984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.231138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.231324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.231469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.231631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.231799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.231932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.231958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.232050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.232076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.160 [2024-10-30 12:38:12.232185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.160 [2024-10-30 12:38:12.232213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.160 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.232338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.232370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 
00:26:40.161 [2024-10-30 12:38:12.232462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.232489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.232571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.232597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.232736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.232764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.232876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.232903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.233016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.233164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.233306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.233472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.233614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.233723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 
00:26:40.161 [2024-10-30 12:38:12.233861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.233922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.234904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.234976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 
00:26:40.161 [2024-10-30 12:38:12.235314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.235958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.235986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.236133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.236161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.236286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.236312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.236426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.236452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.236585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.236610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 
00:26:40.161 [2024-10-30 12:38:12.236725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.236751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.236846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.236871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.236984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.237010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.237129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.161 [2024-10-30 12:38:12.237154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.161 qpair failed and we were unable to recover it. 00:26:40.161 [2024-10-30 12:38:12.237271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.237297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.237416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.237441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.237526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.237551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.237675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.237700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.237842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.237867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.237991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 
00:26:40.162 [2024-10-30 12:38:12.238108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.238221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.238378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.238535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.238646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.238769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.238918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.238944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.239030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.239172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.239326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 
00:26:40.162 [2024-10-30 12:38:12.239440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.239614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.239720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.239856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.239882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 
00:26:40.162 [2024-10-30 12:38:12.240782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.240923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.240949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.241865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.241890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.242002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.242029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 
00:26:40.162 [2024-10-30 12:38:12.242178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.242206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.242319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.242350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.242443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.242473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.242613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.242640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.242775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.162 [2024-10-30 12:38:12.242801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.162 qpair failed and we were unable to recover it. 00:26:40.162 [2024-10-30 12:38:12.242979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.243006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.243098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.243125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.243224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.243270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.243420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.243448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.243537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.243563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 
00:26:40.163 [2024-10-30 12:38:12.243676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.243737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.243962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.244920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.244946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.245062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.245091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 
00:26:40.163 [2024-10-30 12:38:12.245247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.245294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.245436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.245465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.245588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.245615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.245727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.245752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.245869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.245894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.245986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.246158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.246275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.246440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.246556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 
00:26:40.163 [2024-10-30 12:38:12.246725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.246906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.246966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.247902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.247928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 00:26:40.163 [2024-10-30 12:38:12.248020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.163 [2024-10-30 12:38:12.248048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.163 qpair failed and we were unable to recover it. 
00:26:40.163 [2024-10-30 12:38:12.248166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.248192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.248324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.248354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.248467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.248492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.248583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.248609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.248751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.248777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.248870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.248896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.249006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.249118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.249283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.249410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 
00:26:40.164 [2024-10-30 12:38:12.249523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.249677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.249893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.249919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.250788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 
00:26:40.164 [2024-10-30 12:38:12.250956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.250981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.164 [2024-10-30 12:38:12.251889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.164 qpair failed and we were unable to recover it. 00:26:40.164 [2024-10-30 12:38:12.251984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.252129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 
00:26:40.165 [2024-10-30 12:38:12.252245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.252376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.252490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.252653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.252810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.252929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.252958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.253075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.253248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.253370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.253483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 
00:26:40.165 [2024-10-30 12:38:12.253599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.253778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.253903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.253931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.254871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.254900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 
00:26:40.165 [2024-10-30 12:38:12.255047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.255161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.255305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.255454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.255592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.255743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.255885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.255910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.256026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.256052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.256155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.256181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.256277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.256305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 
00:26:40.165 [2024-10-30 12:38:12.256432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.256458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.165 qpair failed and we were unable to recover it. 00:26:40.165 [2024-10-30 12:38:12.256541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.165 [2024-10-30 12:38:12.256567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.256677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.256702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.256820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.256846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.256963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.256994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.257093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.257296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.257432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.257547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.257749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 
00:26:40.166 [2024-10-30 12:38:12.257859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.257971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.257996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.258918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.258945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.259064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 
00:26:40.166 [2024-10-30 12:38:12.259213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.259374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.259529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.259692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.259840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.259952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.259979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.260090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.260206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.260340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.260507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 
00:26:40.166 [2024-10-30 12:38:12.260648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.260763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.260911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.260949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.261071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.261100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.261216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.261244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.166 [2024-10-30 12:38:12.261358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.166 [2024-10-30 12:38:12.261384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.166 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.261498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.261524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.261637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.261663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.261779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.261804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.261918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.261948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 
00:26:40.167 [2024-10-30 12:38:12.262104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.262266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.262408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.262550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.262725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.262825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.262967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.262995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.263116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.263146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.263276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.263308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.263411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.263437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 
00:26:40.167 [2024-10-30 12:38:12.263551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.263577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.263665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.263690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.263832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.263857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.263959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.264117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.264265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.264384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.264525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.264634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.264741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 
00:26:40.167 [2024-10-30 12:38:12.264882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.264908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.265940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.265967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.266080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.266107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 00:26:40.167 [2024-10-30 12:38:12.266211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.266249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.167 qpair failed and we were unable to recover it. 
00:26:40.167 [2024-10-30 12:38:12.266399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.167 [2024-10-30 12:38:12.266427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.266516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.266543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.266631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.266657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.266746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.266773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.266892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.266918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 
00:26:40.168 [2024-10-30 12:38:12.267751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.267891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.267981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.268873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.268976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 
00:26:40.168 [2024-10-30 12:38:12.269127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.269274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.269418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.269531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.269686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.269830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.269856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 
00:26:40.168 [2024-10-30 12:38:12.270537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.270912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.270938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.271057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.168 [2024-10-30 12:38:12.271084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.168 qpair failed and we were unable to recover it. 00:26:40.168 [2024-10-30 12:38:12.271202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.271229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.271365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.271394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.271481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.271508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.271626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.271652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.271743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.271769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 
00:26:40.169 [2024-10-30 12:38:12.271873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.271898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.272971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.272996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.273137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.273177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 
00:26:40.169 [2024-10-30 12:38:12.273349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.273388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.273509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.273537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.273654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.273680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.273794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.273820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.273930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.273955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.274071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.274099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.274221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.274247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.274367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.274392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.274483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.274509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 00:26:40.169 [2024-10-30 12:38:12.274587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.169 [2024-10-30 12:38:12.274613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.169 qpair failed and we were unable to recover it. 
00:26:40.171 [2024-10-30 12:38:12.280194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.280221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.280346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.280386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.280536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.280565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.280710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.280736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.280850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.280876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.280995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.281137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.281297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.281462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.281602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 
00:26:40.171 [2024-10-30 12:38:12.281721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.281855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.281880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.281987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.282913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.282939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 
00:26:40.171 [2024-10-30 12:38:12.283057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.283969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.283995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.284088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.284116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.284229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.284261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 
00:26:40.171 [2024-10-30 12:38:12.284387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.284414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.284522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.284549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.171 [2024-10-30 12:38:12.284645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.171 [2024-10-30 12:38:12.284671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.171 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.284786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.284813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.284902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.284928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.285041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.285068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.285198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.285237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.285368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.285396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.285495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.285636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.285665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 
00:26:40.172 [2024-10-30 12:38:12.285829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.285877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.286080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.286106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.286227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.286254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.286415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.286442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.286589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.286615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.286758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.286808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.286922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.286949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.287067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.287184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.287355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 
00:26:40.172 [2024-10-30 12:38:12.287466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.287612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.287780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.287928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.287956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.288043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.288191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.288366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.288508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.288617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.288785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 
00:26:40.172 [2024-10-30 12:38:12.288901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.288940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.289065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.289094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.289180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.289208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.289345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.289373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.289517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.289543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.289671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.172 [2024-10-30 12:38:12.289697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.172 qpair failed and we were unable to recover it. 00:26:40.172 [2024-10-30 12:38:12.289893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.289943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.290055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.290081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.290198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.290224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.290346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.290374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 
00:26:40.173 [2024-10-30 12:38:12.290508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.290547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.290670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.290724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.290880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.290930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.291814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 
00:26:40.173 [2024-10-30 12:38:12.291929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.291955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.292096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.292136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.292269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.292297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.292425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.292452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.292601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.292627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.292720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.292746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.292890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.292939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.293030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.293056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.293208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.293247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.293358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.293387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 
00:26:40.173 [2024-10-30 12:38:12.293474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.293501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.293587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.293614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.293761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.293837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 00:26:40.173 [2024-10-30 12:38:12.294788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.173 [2024-10-30 12:38:12.294814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.173 qpair failed and we were unable to recover it. 
00:26:40.174 [2024-10-30 12:38:12.294950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.294976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.295092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.295119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.295271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.295310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.295470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.295509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.295589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.295617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.295710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.295737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.295907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.295957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.296038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.296141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.296284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 
00:26:40.174 [2024-10-30 12:38:12.296426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.296568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.296732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.296881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.296921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.297046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.297185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.297327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.297436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.297576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.297712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 
00:26:40.174 [2024-10-30 12:38:12.297839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.297878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.298884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.298938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.299050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.174 [2024-10-30 12:38:12.299076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.174 qpair failed and we were unable to recover it. 00:26:40.174 [2024-10-30 12:38:12.299219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 
00:26:40.175 [2024-10-30 12:38:12.299356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.299465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.299602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.299722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.299860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.299967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.299993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.300082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.300108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.300222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.300253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.300362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.300389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 00:26:40.175 [2024-10-30 12:38:12.300502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.175 [2024-10-30 12:38:12.300528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.175 qpair failed and we were unable to recover it. 
00:26:40.175 [2024-10-30 12:38:12.300641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.300667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.300753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.300778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.300890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.300917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.301905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.301984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.302895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.302997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.303036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.303160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.303188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.303308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.303337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.303450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.303477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.175 [2024-10-30 12:38:12.303575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.175 [2024-10-30 12:38:12.303602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.175 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.303713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.303739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.303855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.303881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.303969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.303997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.304904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.304932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.305868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.305894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.306840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.306879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.307910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.307938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.308056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.308085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.308202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.308228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.308318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.176 [2024-10-30 12:38:12.308344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.176 qpair failed and we were unable to recover it.
00:26:40.176 [2024-10-30 12:38:12.308456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.308483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.308563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.308590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.308707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.308733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.308869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.308896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.309943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.309969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.310973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.310998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.311894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.311920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.312861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.312887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.177 [2024-10-30 12:38:12.313030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.177 [2024-10-30 12:38:12.313056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.177 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.313921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.313949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.314051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.314090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.314196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.314223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.314375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.314401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.314521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.314546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.314697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.314756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.314909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.314958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.315961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.315989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.316934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.316960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.317078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.317105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.317223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.317247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.317372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.317397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.317485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.317510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.317690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.317738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.317884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.317940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.178 [2024-10-30 12:38:12.318082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.178 [2024-10-30 12:38:12.318110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.178 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.318229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.318263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.318365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.318391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.318476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.318501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.318641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.318667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.318846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.318903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.319091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.319118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.319205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.319233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.319363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.319390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.319472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.319498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.319640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.319667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.319870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.319924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.320018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.320046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.320192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.320220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.320325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.320364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.320455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.320483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.320608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.320634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.320803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.320858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.321862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.321999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.322025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.322138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.322164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.322311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.322351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.322474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.322506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.179 qpair failed and we were unable to recover it.
00:26:40.179 [2024-10-30 12:38:12.322620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.179 [2024-10-30 12:38:12.322646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.322729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.322756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.322939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.322987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.323119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.323160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.323363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.323392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.323531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.323558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.323636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.323662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.323809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.323861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.324861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.324900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.325882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.325908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.326898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.326924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.327059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.327084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.327212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.327238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.327343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.327369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.327509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.327536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.327679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.327707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.180 qpair failed and we were unable to recover it.
00:26:40.180 [2024-10-30 12:38:12.327824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.180 [2024-10-30 12:38:12.327851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.328968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.328993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.329968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.329994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.330142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.330181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.330316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.330356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.330443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.330470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.330593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.330620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.330795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.330843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.330936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.330962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.331078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.331105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.331198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.331227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.331389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.331428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.331548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.331576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.331783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.331836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.331976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.332138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.332277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.332422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.332605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.332727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.332888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.332927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.181 [2024-10-30 12:38:12.333022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.181 [2024-10-30 12:38:12.333049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.181 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.333176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.333216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.333324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.333353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.333496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.333523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.333667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.333693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.333871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.333922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.334026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.334087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.334215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.334262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.334385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.334412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.334533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.334558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.334648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.334672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.334842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.334892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.335066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.335121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.335270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.335298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.335407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.335433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.335629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.335656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.335800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.335848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.335936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.335962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.336890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.336995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.337843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.337993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.338031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.182 [2024-10-30 12:38:12.338152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.182 [2024-10-30 12:38:12.338179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.182 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.338275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.338303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.338390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.338416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.338552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.338602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.338743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.338769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.338851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.338883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.339879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.339989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.340944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.340971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.341871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.341898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.342919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.183 [2024-10-30 12:38:12.342971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.183 qpair failed and we were unable to recover it.
00:26:40.183 [2024-10-30 12:38:12.343098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.343137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.343270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.343297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.343384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.343411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.343546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.343611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.343752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.343778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.343996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.344187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.344321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.344464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.344601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.344752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.344949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.344978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.345937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.345965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.346861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.346984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.347008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.347095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.347122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.184 qpair failed and we were unable to recover it.
00:26:40.184 [2024-10-30 12:38:12.347272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.184 [2024-10-30 12:38:12.347298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.347384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.347411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.347524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.347550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.347639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.347666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.347787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.347813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.347948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.347999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.348921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.348948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.349942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.349966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.350079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.350211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72f30 is same with the state(6) to be set
00:26:40.185 [2024-10-30 12:38:12.350488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.350527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.350646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.350766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.350899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.350987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.185 qpair failed and we were unable to recover it.
00:26:40.185 [2024-10-30 12:38:12.351914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.185 [2024-10-30 12:38:12.351939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.352917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.352943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.353912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.353944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.354091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.354221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.354349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.354473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.354644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.354795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.354997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.355963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.355989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.356145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.356184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.356291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.356319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.356402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.356431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.356527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.356553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.356670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.356696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.186 qpair failed and we were unable to recover it.
00:26:40.186 [2024-10-30 12:38:12.356819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.186 [2024-10-30 12:38:12.356858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.357912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.357971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.358836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.358950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.359910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.359938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.360941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.360969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.361054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.361080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.361274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.361300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.361413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.361440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.361539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.187 [2024-10-30 12:38:12.361566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.187 qpair failed and we were unable to recover it.
00:26:40.187 [2024-10-30 12:38:12.361699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.361749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.361895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.361942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.362868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.362896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.363851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.363879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.364022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.364048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.364190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.364217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.364302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.364329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.364471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.364497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.364669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.364723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.364902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.364930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.365897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.365935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.366079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.366106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.366248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.366283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.366401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.366426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.366518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.188 [2024-10-30 12:38:12.366544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.188 qpair failed and we were unable to recover it.
00:26:40.188 [2024-10-30 12:38:12.366662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.366689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.366771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.366797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.366888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.366915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.367925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.367952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.368860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.368976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.369959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.369986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.370069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.370096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.370209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.370234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.370363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.370392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.370484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.189 [2024-10-30 12:38:12.370510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.189 qpair failed and we were unable to recover it.
00:26:40.189 [2024-10-30 12:38:12.370647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.370672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.370784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.370809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.370911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.370948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.371945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.371970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.372860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.372885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.373026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.373167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.373370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.373491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.373593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.373810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.373994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.374019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.374131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.190 [2024-10-30 12:38:12.374155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.190 qpair failed and we were unable to recover it.
00:26:40.190 [2024-10-30 12:38:12.374294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.374321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.374415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.374442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.374533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.374558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.374670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.374716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.374792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.374817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.374936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.374963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.375054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.375080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.190 qpair failed and we were unable to recover it. 00:26:40.190 [2024-10-30 12:38:12.375198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.190 [2024-10-30 12:38:12.375224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.375345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.375372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.375461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.375486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 
00:26:40.191 [2024-10-30 12:38:12.375597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.375622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.375730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.375788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.375972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.376138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.376247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.376394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.376567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.376714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.376855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.376881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.377010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 
00:26:40.191 [2024-10-30 12:38:12.377145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.377287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.377420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.377569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.377741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.377895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.377923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.378032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.378169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.378313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.378431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 
00:26:40.191 [2024-10-30 12:38:12.378623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.378753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.378896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.378927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.379787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 
00:26:40.191 [2024-10-30 12:38:12.379952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.379977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.380057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.380082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.191 [2024-10-30 12:38:12.380194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.191 [2024-10-30 12:38:12.380219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.191 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.380311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.380338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.380459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.380487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.380654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.380689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.380797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.380823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.380937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.380963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.381059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.381087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.381201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.381229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 
00:26:40.192 [2024-10-30 12:38:12.381373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.381411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.381541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.381566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.381709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.381734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.381823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.381854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.381972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.382122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.382272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.382427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.382545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.382653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 
00:26:40.192 [2024-10-30 12:38:12.382811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.382949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.382976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.383870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.383895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 
00:26:40.192 [2024-10-30 12:38:12.384153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.192 [2024-10-30 12:38:12.384885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.192 [2024-10-30 12:38:12.384913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.192 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 
00:26:40.193 [2024-10-30 12:38:12.385384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.385867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.385891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.386019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.386164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.386295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.386429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.386546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 
00:26:40.193 [2024-10-30 12:38:12.386655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.386823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.386850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.387863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.387986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.388017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 
00:26:40.193 [2024-10-30 12:38:12.388165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.388192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.388315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.388343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.388489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.388520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.388648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.388703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.388865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.388922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.389043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.389204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.389382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.389517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.389661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 
00:26:40.193 [2024-10-30 12:38:12.389783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.389948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.389974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.193 [2024-10-30 12:38:12.390062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.193 [2024-10-30 12:38:12.390089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.193 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.390246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.390291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.390386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.390413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.815888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.815934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.816061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.816096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.816240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.816292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.816418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.816451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.816563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.816598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 
00:26:40.194 [2024-10-30 12:38:12.816748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.816782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.816960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.816993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.817140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.817174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.817327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.817362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.817512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.817547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.817679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.817712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.817841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.817875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.818018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.818051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.818159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.818194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.818339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.818375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 
00:26:40.194 [2024-10-30 12:38:12.818502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.818536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.818680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.818713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.818860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.818895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.819078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.819112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.819242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.819285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.819405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.819439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.819552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.819588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.819694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.819728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.819860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.819894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.820009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.820049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 
00:26:40.194 [2024-10-30 12:38:12.820196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.820230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.820385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.820419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.820539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.820573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.820749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.820784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.194 qpair failed and we were unable to recover it. 00:26:40.194 [2024-10-30 12:38:12.820928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.194 [2024-10-30 12:38:12.820962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.821100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.821134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.821253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.821312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.821459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.821496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.821647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.821683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.821799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.821834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 
00:26:40.195 [2024-10-30 12:38:12.822077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.822160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.822309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.822346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.822470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.822505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.822662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.822698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.822848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.822884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.822998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.823076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.823290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.823341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.823533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.823598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.195 [2024-10-30 12:38:12.823846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.195 [2024-10-30 12:38:12.823910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.195 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.824084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.824118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 
00:26:40.475 [2024-10-30 12:38:12.824266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.824302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.824419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.824452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.824965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.825029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.825170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.825206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.825364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.825400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.825559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.825593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.825735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.825789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.825918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.825957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.826157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.826220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.475 [2024-10-30 12:38:12.826645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.826713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 
00:26:40.475 [2024-10-30 12:38:12.827008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.475 [2024-10-30 12:38:12.827073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.475 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.827390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.827451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.827646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.827706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.827951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.828011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.828312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.828377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.828641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.828705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.828959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.829023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.829232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.829320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.829570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.829630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.829820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.829881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 
00:26:40.476 [2024-10-30 12:38:12.830064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.830122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.830360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.830428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.830621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.830683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.830969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.831028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.831303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.831368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.831545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.831606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.831876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.831935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.832137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.832197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.832463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.832525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.832802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.832860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 
00:26:40.476 [2024-10-30 12:38:12.833138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.833198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.833409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.833470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.833675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.833737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.834002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.834092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.834401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.834471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.834730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.834795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.835086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.835146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.835352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.835412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.835630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.835690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.835975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.836040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 
00:26:40.476 [2024-10-30 12:38:12.836290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.836551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.476 [2024-10-30 12:38:12.836616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.476 qpair failed and we were unable to recover it. 00:26:40.476 [2024-10-30 12:38:12.836864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.836928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.837194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.837272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.837473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.837537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.837738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.837803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.838014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.838077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.838383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.838449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.838694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.838758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.838980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.839042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 
00:26:40.477 [2024-10-30 12:38:12.839289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.839354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.839593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.839657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.839950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.840014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.840275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.840340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.840553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.840618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.840863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.840926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.841175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.841238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.841511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.841576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.841824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.841888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.842135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.842198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 
00:26:40.477 [2024-10-30 12:38:12.842480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.842555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.842839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.842903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.843107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.843171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.843470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.843536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.843727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.843791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.844079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.844143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.844362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.844427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.844621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.844685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.844979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.845043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.845289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.845354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 
00:26:40.477 [2024-10-30 12:38:12.845607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.845671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.845920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.845984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.477 [2024-10-30 12:38:12.846230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.477 [2024-10-30 12:38:12.846306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.477 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.846616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.846680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.846880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.846946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.847202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.847279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.847493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.847557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.847800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.847865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.848132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.848198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.848479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.848546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 
00:26:40.478 [2024-10-30 12:38:12.848850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.848915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.849201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.849297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.849595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.849659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.849904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.849967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.850202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.850283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.850583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.850647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.850893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.850959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.851185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.851277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.851570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.851633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.851882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.851946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 
00:26:40.478 [2024-10-30 12:38:12.852157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.852221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.852457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.852521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.852721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.852787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.853080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.853144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.853403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.853469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.853766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.853830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.854131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.854194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.854472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.854536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.854749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.854814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.855051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.855114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 
00:26:40.478 [2024-10-30 12:38:12.855338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.855403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.855644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.855708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.855957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.856020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.856313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.856378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.478 qpair failed and we were unable to recover it. 00:26:40.478 [2024-10-30 12:38:12.856635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.478 [2024-10-30 12:38:12.856698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.856948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.857010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.857277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.857342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.857586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.857649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.857938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.858001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.858303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.858368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 
00:26:40.479 [2024-10-30 12:38:12.858657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.858721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.858965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.859028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.859246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.859330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.859629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.859692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.859940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.860015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.860286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.860353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.860614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.860678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.860973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.861036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.861323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.861389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.861636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.861699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 
00:26:40.479 [2024-10-30 12:38:12.861889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.861952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.862210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.862285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.862548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.862611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.862851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.862915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.863198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.863273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.863506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.863569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.863819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.863886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.864186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.864250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.864574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.864638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.864890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.864954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 
00:26:40.479 [2024-10-30 12:38:12.865182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.865246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.865534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.865596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.865844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.865907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.866138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.866202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.866475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.479 [2024-10-30 12:38:12.866539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.479 qpair failed and we were unable to recover it. 00:26:40.479 [2024-10-30 12:38:12.866832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.866895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.867154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.867218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.867419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.867484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.867737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.867801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.868088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.868151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 
00:26:40.480 [2024-10-30 12:38:12.868395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.868460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.868761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.868824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.869098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.869162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.869456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.869521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.869729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.869795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.870089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.870154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.870418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.870483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.870779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.870842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.871089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.871155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.871436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.871501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 
00:26:40.480 [2024-10-30 12:38:12.871797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.871861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.872153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.872217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.872466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.872531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.872782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.872846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.873165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.873228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.873530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.873595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.873856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.873920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.874177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.874240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.874472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.874540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 00:26:40.480 [2024-10-30 12:38:12.874787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.480 [2024-10-30 12:38:12.874851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.480 qpair failed and we were unable to recover it. 
00:26:40.480 [2024-10-30 12:38:12.875160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.480 [2024-10-30 12:38:12.875224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.480 qpair failed and we were unable to recover it.
[... the identical three-line failure (connect() errno = 111 at posix.c:1055, sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2288, then "qpair failed and we were unable to recover it.") repeats on every retry from 12:38:12.875540 through 12:38:12.884877; only the timestamps change ...]
00:26:40.481 [2024-10-30 12:38:12.885135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.481 [2024-10-30 12:38:12.885200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.481 qpair failed and we were unable to recover it.
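[Editor's note: errno 111 on Linux is ECONNREFUSED, i.e. the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 (the standard NVMe/TCP port), which is the condition this test stage provokes. A minimal standalone sketch of the failing call for reproducing the same log line by hand on the test host; the address and port are taken from the log above, and nothing below is SPDK code:]

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Same endpoint the log shows: 10.0.0.2, port 4420. */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on the port, connect() on the test host fails
         * with errno = 111 (ECONNREFUSED); from another machine the same
         * call may instead time out or report the host unreachable. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }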
00:26:40.481 [2024-10-30 12:38:12.885574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.482 [2024-10-30 12:38:12.885673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.482 qpair failed and we were unable to recover it.
[... the same failure pattern, now against a freshly allocated qpair (tqpair=0x7fd8f8000b90), repeats from 12:38:12.885909 through 12:38:12.917824; address, port, and errno are unchanged ...]
00:26:40.485 [2024-10-30 12:38:12.918115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.485 [2024-10-30 12:38:12.918180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.485 qpair failed and we were unable to recover it.
00:26:40.485 [2024-10-30 12:38:12.918467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.485 [2024-10-30 12:38:12.918575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.485 qpair failed and we were unable to recover it.
[... the run continues against a third qpair (tqpair=0x7fd8fc000b90) with the identical three-line failure repeating from 12:38:12.918891 through 12:38:12.944283 ...]
00:26:40.487 [2024-10-30 12:38:12.944582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.487 [2024-10-30 12:38:12.944658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.487 qpair failed and we were unable to recover it.
00:26:40.487 [2024-10-30 12:38:12.944947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.487 [2024-10-30 12:38:12.945013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.487 qpair failed and we were unable to recover it. 00:26:40.487 [2024-10-30 12:38:12.945300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.945366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.945619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.945685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.945945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.946010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.946207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.946287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.946581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.946647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.946941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.947007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.947291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.947359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.947571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.947637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.947864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.947930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 
00:26:40.488 [2024-10-30 12:38:12.948179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.948244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.948554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.948621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.948876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.948941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.949247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.949331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.949614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.949680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.949935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.950001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.950248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.950342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.950637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.950702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.950954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.951019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.951298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.951366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 
00:26:40.488 [2024-10-30 12:38:12.951580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.951645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.951934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.952000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.952311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.952379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.952583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.952647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.952862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.952929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.953223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.953302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.953574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.953641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.953897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.953964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.954170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.954237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.954510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.954576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 
00:26:40.488 [2024-10-30 12:38:12.954826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.954892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.955140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.955207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.488 [2024-10-30 12:38:12.955526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.488 [2024-10-30 12:38:12.955624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.488 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.955935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.956004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.956282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.956352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.956596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.956661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.956903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.956967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.957184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.957253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.957583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.957648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 00:26:40.489 [2024-10-30 12:38:12.957855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.489 [2024-10-30 12:38:12.957946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.489 qpair failed and we were unable to recover it. 
[the same sequence repeats for tqpair=0x7fd904000b90 through 2024-10-30 12:38:12.992672]
00:26:40.493 [2024-10-30 12:38:12.992922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.493 [2024-10-30 12:38:12.992992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.493 qpair failed and we were unable to recover it.
[the same sequence repeats for tqpair=0x1b64fa0 through 2024-10-30 12:38:12.994110]
00:26:40.493 [2024-10-30 12:38:12.994243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.994283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.994418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.994451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.994600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.994632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.994780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.994811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.994905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.994937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.995065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.995095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.995196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.995227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.995351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.995382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.995521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.995552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.995659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.995691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 
00:26:40.493 [2024-10-30 12:38:12.995825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.995856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.995992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.996027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.996136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.996167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.996316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.996349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.996491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.996523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.493 [2024-10-30 12:38:12.996662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.493 [2024-10-30 12:38:12.996693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.493 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.996795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.996834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.997001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.997061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.997224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.997263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.997377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.997409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 
00:26:40.494 [2024-10-30 12:38:12.997555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.997602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.997795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.997854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.997960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.997992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.998102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.998133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.998252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.998297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.998437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.998469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.998605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.998636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.998774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.998806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.998972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.999003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.999136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.999171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 
00:26:40.494 [2024-10-30 12:38:12.999351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.999384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.999521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.999554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.999681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.999713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:12.999830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:12.999862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.000001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.000035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.000166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.000200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.000319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.000353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.000542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.000609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.000791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.000844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.001046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.001103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 
00:26:40.494 [2024-10-30 12:38:13.001234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.001275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.001474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.001536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.001754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.001808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.002016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.002054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.002151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.002183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.002365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.002422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.002595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.002646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.002889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.494 [2024-10-30 12:38:13.002942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.494 qpair failed and we were unable to recover it. 00:26:40.494 [2024-10-30 12:38:13.003081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.003112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.003272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.003305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 
00:26:40.495 [2024-10-30 12:38:13.003405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.003436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.003585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.003648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.003825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.003888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.003998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.004030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.004193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.004225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.004412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.004466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.004716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.004770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.004972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.005027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.005169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.005201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.005393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.005446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 
00:26:40.495 [2024-10-30 12:38:13.005604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.005652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.005784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.005843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.006012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.006044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.006184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.006215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.006365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.006396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.006527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.006561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.006688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.006719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.006827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.006858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.007017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.007049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.007182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.007214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 
00:26:40.495 [2024-10-30 12:38:13.007330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.007367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.007531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.007563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.007701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.007732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.007862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.007894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.008032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.008063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.008202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.008232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.008357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.008389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.008525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.008556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.008694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.495 [2024-10-30 12:38:13.008726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.495 qpair failed and we were unable to recover it. 00:26:40.495 [2024-10-30 12:38:13.008864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.008896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 
00:26:40.496 [2024-10-30 12:38:13.009027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.009059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.009166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.009198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.009350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.009381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.009543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.009574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.009686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.009719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.009852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.009883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.010024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.010218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.010367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.010540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 
00:26:40.496 [2024-10-30 12:38:13.010667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.010809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.010948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.010979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.011110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.011142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.011286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.011318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.011456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.011488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.011602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.011634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.011762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.011794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.011931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.011962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.012073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.012105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 
00:26:40.496 [2024-10-30 12:38:13.012209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.012241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.012400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.012433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.012611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.012644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.012756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.012787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.012888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.012920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.013086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.013117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.013290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.013322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.013459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.013513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.013674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.013706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 00:26:40.496 [2024-10-30 12:38:13.013816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.496 [2024-10-30 12:38:13.013847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.496 qpair failed and we were unable to recover it. 
00:26:40.496 [2024-10-30 12:38:13.014011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.014043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.014160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.014193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.014331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.014380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.014545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.014577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.014764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.014829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.014939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.014971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.015107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.015138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.015276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.015308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.015446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.015477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.015627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.015675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 
00:26:40.497 [2024-10-30 12:38:13.015776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.015807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.015976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.016007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.016141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.016172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.016324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.016390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.016511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.016575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.016743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.016775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.016912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.016944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.017111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.017143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.017280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.017396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 00:26:40.497 [2024-10-30 12:38:13.017718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.497 [2024-10-30 12:38:13.017787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.497 qpair failed and we were unable to recover it. 
00:26:40.497 [2024-10-30 12:38:13.018052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.497 [2024-10-30 12:38:13.018118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.497 qpair failed and we were unable to recover it.
00:26:40.497 [2024-10-30 12:38:13.019219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.497 [2024-10-30 12:38:13.019252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.497 qpair failed and we were unable to recover it.
00:26:40.499 [2024-10-30 12:38:13.026068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.499 [2024-10-30 12:38:13.026117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.499 qpair failed and we were unable to recover it.
00:26:40.504 [... the same three-line failure (connect() errno = 111, sock connection error, "qpair failed and we were unable to recover it.") repeats continuously through 12:38:13.069369, cycling across tqpair=0x7fd904000b90, 0x1b64fa0, and 0x7fd8fc000b90 ...]
00:26:40.504 [2024-10-30 12:38:13.069302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.504 [2024-10-30 12:38:13.069369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.504 qpair failed and we were unable to recover it.
00:26:40.504 [2024-10-30 12:38:13.069670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.069736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.069982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.070047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.070339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.070406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.070633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.070698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.070946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.071012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.071286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.071352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.071621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.071686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.071991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.504 [2024-10-30 12:38:13.072054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.504 qpair failed and we were unable to recover it. 00:26:40.504 [2024-10-30 12:38:13.072286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.072352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.072599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.072664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 
00:26:40.505 [2024-10-30 12:38:13.072932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.072996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.073232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.073313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.073606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.073680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.073990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.074058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.074304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.074370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.074629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.074693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.074898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.074965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.075183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.075248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.075563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.075634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.075946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.076010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 
00:26:40.505 [2024-10-30 12:38:13.076280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.076346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.076603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.076669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.076971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.077035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.077326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.077411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.077704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.077768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.078032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.078096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.078352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.078419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.078628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.078703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.078946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.079011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.079239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.079316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 
00:26:40.505 [2024-10-30 12:38:13.079550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.079614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.079908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.079972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.080179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.080244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.080564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.080629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.080828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.080892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.081177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.081247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.081486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.081552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.081849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.081914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.082208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.082304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 00:26:40.505 [2024-10-30 12:38:13.082535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.505 [2024-10-30 12:38:13.082613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.505 qpair failed and we were unable to recover it. 
00:26:40.505 [2024-10-30 12:38:13.082857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.082923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.083162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.083228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.083510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.083574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.083869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.083933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.084171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.084235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.084460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.084526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.084768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.084834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.085099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.085173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.085473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.085541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.085771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.085836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 
00:26:40.506 [2024-10-30 12:38:13.086131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.086196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.086424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.086495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.086784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.086864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.087128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.087193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.087467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.087534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.087829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.087893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.088141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.088205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.088515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.088580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.088830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.088896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.089148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.089213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 
00:26:40.506 [2024-10-30 12:38:13.089483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.089549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.089817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.089885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.090134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.090201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.090523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.090591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.090890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.090954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.091245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.091325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.506 qpair failed and we were unable to recover it. 00:26:40.506 [2024-10-30 12:38:13.091582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.506 [2024-10-30 12:38:13.091648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.091908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.091972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.092223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.092302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.092508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.092573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 
00:26:40.507 [2024-10-30 12:38:13.092806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.092870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.093116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.093183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.093400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.093467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.093725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.093790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.094088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.094152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.094434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.094500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.094729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.094794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.095079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.095144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.095380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.095445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.095708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.095777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 
00:26:40.507 [2024-10-30 12:38:13.096078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.096145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.096458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.096534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.096824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.096890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.097188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.097254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.097490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.097554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.097769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.097834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.098085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.098150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.098445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.098511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.098704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.098770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.099069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.099133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 
00:26:40.507 [2024-10-30 12:38:13.099397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.099464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.099710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.099774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.100066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.100148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.100412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.100479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.100689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.100757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.101013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.101078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.101339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.101405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.101643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.507 [2024-10-30 12:38:13.101707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.507 qpair failed and we were unable to recover it. 00:26:40.507 [2024-10-30 12:38:13.101982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.102047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.102299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.102364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 
00:26:40.508 [2024-10-30 12:38:13.102606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.102669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.102878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.102942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.103150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.103218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.103463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.103530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.103830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.103895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.104088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.104152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.104473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.104538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.104737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.104802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.105086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.105151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.105457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.105521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 
00:26:40.508 [2024-10-30 12:38:13.105769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.105843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.106135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.106201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.106527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.106592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.106903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.106968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.107214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.107297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.107548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.107614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.107832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.107897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.108141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.108205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.108431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.108500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.108811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.108909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 
00:26:40.508 [2024-10-30 12:38:13.109163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.109232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.109530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.109596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.109817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.109884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.110127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.110196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.110462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.110530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.110773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.110841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.111081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.111144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.111391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.111456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.111707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.508 [2024-10-30 12:38:13.111772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.508 qpair failed and we were unable to recover it. 00:26:40.508 [2024-10-30 12:38:13.112023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.509 [2024-10-30 12:38:13.112086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.509 qpair failed and we were unable to recover it. 
00:26:40.509 [2024-10-30 12:38:13.112381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.112458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.112759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.112823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.113112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.113175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.113451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.113517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.113823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.113887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.114191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.114253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.114537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.114601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.114858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.114922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.115157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.115220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.115528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.115593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.115851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.115914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.116155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.116219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.116450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.116518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.116810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.116874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.117071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.117134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.117394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.117459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.117751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.117826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.118035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.118099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.118368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.118433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.118653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.118716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.118950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.119013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.119244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.119321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.119559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.119623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.119878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.119941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.120231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.120310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.120506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.120573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.120795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.120858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.121113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.121177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.121382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.121448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.121733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.121797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.509 [2024-10-30 12:38:13.122112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.509 [2024-10-30 12:38:13.122176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.509 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.122433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.122498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.122758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.122822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.123039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.123101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.123357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.123423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.123718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.123781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.124022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.124085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.124386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.124451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.124706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.124769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.124992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.125056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.125306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.125370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.125634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.125700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.125991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.126053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.126325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.126407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.126711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.126774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.127061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.127124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.127339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.127407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.127698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.127763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.128058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.128122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.128382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.128447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.128703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.128766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.129054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.129117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.129414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.129479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.129725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.129787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.130076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.130140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.130364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.130429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.130721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.130784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.131094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.131158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.131435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.131500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.131752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.131815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.132069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.132133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.132395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.510 [2024-10-30 12:38:13.132459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.510 qpair failed and we were unable to recover it.
00:26:40.510 [2024-10-30 12:38:13.132679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.132742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.133034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.133097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.133304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.133369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.133617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.133680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.133912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.133974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.134224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.134300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.134560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.134625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.134892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.134955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.135206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.135298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.135592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.135656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.135910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.135973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.136175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.136242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.136590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.136656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.136908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.136971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.137280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.137345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.137626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.137690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.137997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.138060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.138318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.138385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.138675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.138739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.138987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.139051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.511 [2024-10-30 12:38:13.139359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.511 [2024-10-30 12:38:13.139423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.511 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.139651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.139712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.139918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.139980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.140283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.140349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.140602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.140663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.140894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.140954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.141207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.141284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.141539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.141602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.141855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.141917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.142155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.142218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.142480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.142543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.142775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.142838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.143086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.143152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.143430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.143495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.143755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.143819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.144022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.144084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.144306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.144372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.144621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.144685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.144931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.144998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.145245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.145325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.145625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.145690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.145982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.146046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.146292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.146357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.146579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.146642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.146908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.146972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.147275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.147339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.147587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.147651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.147942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.148007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.148287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.148352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.148649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.148714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.148974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.149038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.149326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.149392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.149655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.149719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.150006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.799 [2024-10-30 12:38:13.150068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.799 qpair failed and we were unable to recover it.
00:26:40.799 [2024-10-30 12:38:13.150354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.150418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.150706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.150770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.151018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.151081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.151328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.151393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.151653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.151718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.152016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.152079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.152313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.152378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.152633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.152697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.152986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.153049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.153355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.153421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.153630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.153694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.153955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.154018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.154276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.154341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.154588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.154653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.154888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.154951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.155208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.155284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.155525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.155589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.155840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.155904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.156161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.156224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.156553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.156617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.156878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.156942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.157187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.157252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.157531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.157604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.157864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.157928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.158134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.158198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.158500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.158565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.158761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.158826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.159073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.159136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.159370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.159435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.159727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.159791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.160093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.160156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.160463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.160528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.160826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.160890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.161190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.161253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.161529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.161592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.161817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.161882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.162099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.162165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.162460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.162525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.162811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.162875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.163159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.163222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.800 [2024-10-30 12:38:13.163541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.800 [2024-10-30 12:38:13.163605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.800 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.163903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.163968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.164208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.164301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.164560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.164623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.164923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.164987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.165243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.165327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.165538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.165601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.165808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.165874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.166096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.166161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.166421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.166495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.166736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.166800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.167092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.167156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.167467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.167532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.167784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.167848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.168104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.168167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.168478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.168543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.168803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.168866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.169081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.169143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.169384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.169450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.169699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.169762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.170012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.170075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.170322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.170388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.170635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.170699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.170972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.171035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.171281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.171347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.171641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.171705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.171950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.172013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.172311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.172376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.172673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.172736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.173034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.173097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.173352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.173417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.173618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.173681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.173903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.173966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.174221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.174305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.174530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.174596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.174843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.174906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.175141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.175205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.175533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.175598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.175806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.175869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.176109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.176176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.801 [2024-10-30 12:38:13.176490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.801 [2024-10-30 12:38:13.176556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.801 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.176846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.176910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.177149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.177213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.177471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.177534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.177783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.177850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.178144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.178208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.178488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.178551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.178770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.178833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.179120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.179183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.179494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.179558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.179839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.179903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.180190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.180253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.180521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.180584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.180826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.802 [2024-10-30 12:38:13.180892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.802 qpair failed and we were unable to recover it.
00:26:40.802 [2024-10-30 12:38:13.181187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.181252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.181572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.181634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.181943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.182006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.182291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.182360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.182606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.182670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.182942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.183005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.183301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.183367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.183617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.183681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.183968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.184032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.184284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.184349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 
00:26:40.802 [2024-10-30 12:38:13.184657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.184721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.184955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.185018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.185315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.185380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.185664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.185728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.185967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.186030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.186230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.186313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.186575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.186638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.186928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.186990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.187242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.187322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.187614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.187678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 
00:26:40.802 [2024-10-30 12:38:13.187983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.188046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.188240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.188320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.188609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.188673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.802 [2024-10-30 12:38:13.188915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.802 [2024-10-30 12:38:13.188988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.802 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.189307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.189372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.189661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.189726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.190020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.190083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.190350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.190415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.190675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.190739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.191042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.191105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 
00:26:40.803 [2024-10-30 12:38:13.191394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.191459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.191757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.191820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.192062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.192125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.192415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.192480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.192737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.192799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.193098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.193161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.193466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.193531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.193844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.193907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.194198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.194280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.194529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.194592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 
00:26:40.803 [2024-10-30 12:38:13.194892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.195248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.195330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.195584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.195647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.195894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.195957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.196218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.196300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.196568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.196632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.196878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.196944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.197200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.197297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.197540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.197603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.197810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.197874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 
00:26:40.803 [2024-10-30 12:38:13.198165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.198238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.198518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.198582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.198826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.198890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.199174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.199238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.199513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.199577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.199820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.199883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.200174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.200237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.200557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.803 [2024-10-30 12:38:13.200621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.803 qpair failed and we were unable to recover it. 00:26:40.803 [2024-10-30 12:38:13.200872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.200934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.201195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.201285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 
00:26:40.804 [2024-10-30 12:38:13.201577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.201640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.201938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.202002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.202250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.202333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.202559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.202622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.202873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.202936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.203166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.203229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.203492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.203555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.203864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.203927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.204166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.204230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.204467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.204530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 
00:26:40.804 [2024-10-30 12:38:13.204722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.204788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.205032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.205096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.205387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.205453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.205705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.205767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.206048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.206111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.206315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.206380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.206652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.206715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.207012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.207085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.207385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.207447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.207634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.207695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 
00:26:40.804 [2024-10-30 12:38:13.207915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.207976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.208272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.208337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.208592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.208654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.208949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.209013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.209242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.209344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.209600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.209663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.209895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.209958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.210196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.210283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.210494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.210562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.210825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.210888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 
00:26:40.804 [2024-10-30 12:38:13.211136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.211199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.211481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.211546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.211843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.211906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.212192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.212273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.212518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.804 [2024-10-30 12:38:13.212581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.804 qpair failed and we were unable to recover it. 00:26:40.804 [2024-10-30 12:38:13.212836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.212899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.213137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.213201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.213493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.213557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.213858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.213922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.214193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.214276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 
00:26:40.805 [2024-10-30 12:38:13.214584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.214649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.214906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.214970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.215220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.215311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.215567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.215631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.215878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.215941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.216163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.216226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.216504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.216570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.216834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.216898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.217200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.217282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.217483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.217555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 
00:26:40.805 [2024-10-30 12:38:13.217807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.217871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.218082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.218144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.218460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.218524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.218814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.218878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.219125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.219187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.219514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.219587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.219800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.219863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.220114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.220178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.220420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.220486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.220733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.220802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 
00:26:40.805 [2024-10-30 12:38:13.221098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.221162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.221452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.221518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.221774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.221837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.222127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.222190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.222515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.222586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.222886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.222949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.223200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.223280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.223490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.223554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.223850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.223913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 00:26:40.805 [2024-10-30 12:38:13.224220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.224303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it. 
00:26:40.805 [2024-10-30 12:38:13.224554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.805 [2024-10-30 12:38:13.224619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.805 qpair failed and we were unable to recover it.
00:26:40.805 [... last three messages repeated for tqpair=0x1b64fa0 through 2024-10-30 12:38:13.264078; every connect() to addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED) ...]
00:26:40.810 [2024-10-30 12:38:13.264207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.810 [2024-10-30 12:38:13.264252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.810 qpair failed and we were unable to recover it.
00:26:40.810 [... last three messages repeated for tqpair=0x7fd8fc000b90 through 2024-10-30 12:38:13.265463 ...]
00:26:40.810 [2024-10-30 12:38:13.265637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.810 [2024-10-30 12:38:13.265691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.810 qpair failed and we were unable to recover it.
00:26:40.810 [... last three messages repeated for tqpair=0x7fd8f8000b90 through 2024-10-30 12:38:13.268994 ...]
00:26:40.810 [2024-10-30 12:38:13.269128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.810 [2024-10-30 12:38:13.269165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.810 qpair failed and we were unable to recover it.
00:26:40.810 [... last three messages repeated for tqpair=0x7fd8fc000b90 through 2024-10-30 12:38:13.271603 ...]
00:26:40.811 [2024-10-30 12:38:13.271700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.271734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.271837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.271871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.271982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.272016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.272197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.272231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.272369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.272399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.272548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.272593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.272712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.272747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.272856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.272907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.273028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.273062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.273171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.273206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 
00:26:40.811 [2024-10-30 12:38:13.273370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.273400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.273491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.273521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.273658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.273693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.273885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.273918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.274084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.274119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.274250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.274320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.274459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.274490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.274665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.274700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.274925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.274983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.275204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.275239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 
00:26:40.811 [2024-10-30 12:38:13.275422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.275451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.275563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.275606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.275729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.275778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.275948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.276007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.276155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.276203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.276318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.276348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.276547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.276605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.276798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.811 [2024-10-30 12:38:13.276857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.811 qpair failed and we were unable to recover it. 00:26:40.811 [2024-10-30 12:38:13.277062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.277101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.277309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.277356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 
00:26:40.812 [2024-10-30 12:38:13.277457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.277486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.277683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.277711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.277858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.277892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.278058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.278099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.278264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.278295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.278399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.278428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.278593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.278627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.278823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.278852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.278956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.279006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.279227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.279286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 
00:26:40.812 [2024-10-30 12:38:13.279377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.279407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.279536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.279565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.279676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.279710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.279879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.279913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.280025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.280059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.280240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.280282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.280425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.280454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.280556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.280585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.280726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.280755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.280970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.281010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 
00:26:40.812 [2024-10-30 12:38:13.281231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.281307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.281506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.281535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.281764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.281798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.281915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.281951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.282157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.282209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.282359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.282390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.282591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.282625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.282849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.282883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.283015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.283062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.283191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.283230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 
00:26:40.812 [2024-10-30 12:38:13.283375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.283408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.283511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.283546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.283658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.812 [2024-10-30 12:38:13.283687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.812 qpair failed and we were unable to recover it. 00:26:40.812 [2024-10-30 12:38:13.283920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.283954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.284066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.284101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.284203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.284237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.284379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.284410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.284506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.284535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.284662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.284690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.284858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.284892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 
00:26:40.813 [2024-10-30 12:38:13.285096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.285150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.285313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.285342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.285433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.285462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.285592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.285622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.285804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.285837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.286023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.286058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.286239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.286278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.286428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.286456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.286591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.286625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.286805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.286834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 
00:26:40.813 [2024-10-30 12:38:13.286984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.287023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.287210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.287246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.287380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.287409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.287511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.287540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.287790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.287824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.287960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.288030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.288190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.288225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.288394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.288425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.288553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.288594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.288683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.288713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 
00:26:40.813 [2024-10-30 12:38:13.288836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.288872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.289090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.289130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.289317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.289346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.289447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.289477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.289579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.289609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.289765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.289794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.289913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.289950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.290125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.290159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.290282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.290312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.290435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.290465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 
00:26:40.813 [2024-10-30 12:38:13.290587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.290616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.290796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.290863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.291000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.291034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.813 qpair failed and we were unable to recover it. 00:26:40.813 [2024-10-30 12:38:13.291144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.813 [2024-10-30 12:38:13.291179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.291306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.291342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.291480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.291514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.291625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.291661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.291815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.291849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.291963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.291997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.292137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.292171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 
00:26:40.814 [2024-10-30 12:38:13.292339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.292374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.292486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.292520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.292703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.292737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.292880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.292923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.293058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.293092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.293211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.293245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.293404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.293674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.293709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.293855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.293889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.294012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.294047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 
00:26:40.814 [2024-10-30 12:38:13.294214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.294248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.294369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.294403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.294544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.294580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.294701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.294735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.294881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.294915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.295052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.295086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.295229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.295282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.295395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.295430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.295613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.295648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 00:26:40.814 [2024-10-30 12:38:13.295760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.814 [2024-10-30 12:38:13.295795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.814 qpair failed and we were unable to recover it. 
00:26:40.814 [2024-10-30 12:38:13.295907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.814 [2024-10-30 12:38:13.295942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.814 qpair failed and we were unable to recover it.
00:26:40.815 [2024-10-30 12:38:13.303198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.815 [2024-10-30 12:38:13.303253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:40.815 qpair failed and we were unable to recover it.
00:26:40.816 [2024-10-30 12:38:13.311066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.816 [2024-10-30 12:38:13.311127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.816 qpair failed and we were unable to recover it.
00:26:40.817 [2024-10-30 12:38:13.317933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.817 [2024-10-30 12:38:13.317977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:40.817 qpair failed and we were unable to recover it.
00:26:40.819 [... the same connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats roughly 210 times between 12:38:13.295907 and 12:38:13.331534, cycling through tqpair handles 0x7fd8fc000b90, 0x7fd904000b90, 0x1b64fa0, and 0x7fd8f8000b90, always against addr=10.0.0.2, port=4420 ...]
00:26:40.819 [2024-10-30 12:38:13.331699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.819 [2024-10-30 12:38:13.331728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.819 qpair failed and we were unable to recover it. 00:26:40.819 [2024-10-30 12:38:13.331852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.331876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.331976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.332866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.332893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 
00:26:40.820 [2024-10-30 12:38:13.332984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.333891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.333917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 
00:26:40.820 [2024-10-30 12:38:13.334342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.334962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.334988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.335101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.335218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.335340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.335465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 
00:26:40.820 [2024-10-30 12:38:13.335609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.335735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.335871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.335900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.336817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.336843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 
00:26:40.820 [2024-10-30 12:38:13.336993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.337019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.820 qpair failed and we were unable to recover it. 00:26:40.820 [2024-10-30 12:38:13.337102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.820 [2024-10-30 12:38:13.337127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.337248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.337363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.337482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.337627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.337766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.337907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.337993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.338111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 
00:26:40.821 [2024-10-30 12:38:13.338302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.338420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.338539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.338684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.338792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.338910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.338937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 
00:26:40.821 [2024-10-30 12:38:13.339571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.339971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.339997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 
00:26:40.821 [2024-10-30 12:38:13.340873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.340899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.340995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.821 [2024-10-30 12:38:13.341757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.821 qpair failed and we were unable to recover it. 00:26:40.821 [2024-10-30 12:38:13.341900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.341926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.342015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 
00:26:40.822 [2024-10-30 12:38:13.342127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.342289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.342430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.342615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.342755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.342901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.342927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 
00:26:40.822 [2024-10-30 12:38:13.343525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.343931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.343959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.344079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.344202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.344329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.344454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.344621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.344764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 
00:26:40.822 [2024-10-30 12:38:13.344927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.344953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.345889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.345915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 
00:26:40.822 [2024-10-30 12:38:13.346297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.346920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.346947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.347030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.347057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.347142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.347169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.822 qpair failed and we were unable to recover it. 00:26:40.822 [2024-10-30 12:38:13.347286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.822 [2024-10-30 12:38:13.347313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.347416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.347455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 
00:26:40.823 [2024-10-30 12:38:13.347587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.347625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.347749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.347777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.347923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.347949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.348831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 
00:26:40.823 [2024-10-30 12:38:13.348948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.348978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.349119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.349303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.349453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.349600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.349716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.349860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.349983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.350012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.350104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.350131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 00:26:40.823 [2024-10-30 12:38:13.350220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.823 [2024-10-30 12:38:13.350247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:40.823 qpair failed and we were unable to recover it. 
00:26:40.828 [2024-10-30 12:38:13.381430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.381461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.381607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.381657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.381820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.381869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.382069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.382192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.382353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.382493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.382700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.382893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.382996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.383027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 
00:26:40.828 [2024-10-30 12:38:13.383185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.383216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.383390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.383421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.383507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.383536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.383689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.383718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.383825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.383854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.383988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.384019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.828 [2024-10-30 12:38:13.384113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.828 [2024-10-30 12:38:13.384143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.828 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.384298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.384329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.384452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.384482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.384622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.384652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 
00:26:40.829 [2024-10-30 12:38:13.384782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.384811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.384934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.384963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.385061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.385090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.385228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.385265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.385399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.385428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.385574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.385604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.385733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.385762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.385943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.385993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.386119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.386164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.386322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.386351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 
00:26:40.829 [2024-10-30 12:38:13.386451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.386481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.386595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.386640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.386765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.386810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.386912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.386941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.387066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.387095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.387249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.387284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.387411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.387439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.387538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.387571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.387706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.387735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.387863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.387892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 
00:26:40.829 [2024-10-30 12:38:13.388025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.388179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.388374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.388502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.388650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.388810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.388967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.388996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.389127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.389155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.389280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.389309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.389444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.389474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 
00:26:40.829 [2024-10-30 12:38:13.389570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.389599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.389741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.389771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.389900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.829 [2024-10-30 12:38:13.389929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.829 qpair failed and we were unable to recover it. 00:26:40.829 [2024-10-30 12:38:13.390031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.390060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.390208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.390237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.390374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.390402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.390554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.390583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.390707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.390736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.390888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.390916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.391043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 
00:26:40.830 [2024-10-30 12:38:13.391197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.391340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.391494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.391675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.391804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.391969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.391998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.392157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.392185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.392315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.392344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.392501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.392530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.392684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.392733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 
00:26:40.830 [2024-10-30 12:38:13.392885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.392913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.393072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.393226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.393401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.393573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.393723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.393880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.393981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.394121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.394281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 
00:26:40.830 [2024-10-30 12:38:13.394440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.394625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.394751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.394898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.394927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.395027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.395180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.395328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.395467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.395631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.395791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 
00:26:40.830 [2024-10-30 12:38:13.395951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.395979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.396134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.396163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.396301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.396330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.396459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.830 [2024-10-30 12:38:13.396488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.830 qpair failed and we were unable to recover it. 00:26:40.830 [2024-10-30 12:38:13.396602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.396630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.396761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.396789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.396916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.396944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.397071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.397099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.397233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.397267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.397396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.397426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 
00:26:40.831 [2024-10-30 12:38:13.397549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.397579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.397706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.397736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.397840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.397870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.398927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.398956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 
00:26:40.831 [2024-10-30 12:38:13.399053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.399178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.399337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.399473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.399605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.399738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.399896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.399926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.400058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.400087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.400215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.400244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.400388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.400419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 
00:26:40.831 [2024-10-30 12:38:13.400551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.400580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.400677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.400706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.400860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.400890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 00:26:40.831 [2024-10-30 12:38:13.401902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.831 [2024-10-30 12:38:13.401931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.831 qpair failed and we were unable to recover it. 
00:26:40.831 [2024-10-30 12:38:13.402055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.831 [2024-10-30 12:38:13.402084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:40.831 qpair failed and we were unable to recover it.
00:26:40.833 [2024-10-30 12:38:13.410266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:40.833 [2024-10-30 12:38:13.410312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:40.833 qpair failed and we were unable to recover it.
00:26:40.837 [... the same three-line error pattern repeats continuously from 12:38:13.402 through 12:38:13.434 with no variation other than the timestamps, alternating between tqpair=0x1b64fa0 and tqpair=0x7fd8fc000b90; every connection attempt to 10.0.0.2 port 4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:26:40.837 [2024-10-30 12:38:13.434875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.435043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.435203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.435386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.435536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.435696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.435823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.435977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.436107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.436299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 
00:26:40.837 [2024-10-30 12:38:13.436431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.436583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.436703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.436853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.436882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.436981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.437135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.437319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.437482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.437639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.437803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 
00:26:40.837 [2024-10-30 12:38:13.437933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.437962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.438123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.438267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.438423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.438576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.438748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.438865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.438997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.439125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.439315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 
00:26:40.837 [2024-10-30 12:38:13.439475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.439636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.439787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.439934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.439964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.837 qpair failed and we were unable to recover it. 00:26:40.837 [2024-10-30 12:38:13.440092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.837 [2024-10-30 12:38:13.440122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.440219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.440248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.440389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.440418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.440573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.440602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.440730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.440759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.440858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.440887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 
00:26:40.838 [2024-10-30 12:38:13.441044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.441225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.441400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.441527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.441649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.441806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.441940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.441969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.442061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.442090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.442190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.442220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.442360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.442390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 
00:26:40.838 [2024-10-30 12:38:13.442517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.442546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.442680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.442710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.442845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.442874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.443931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.443960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 
00:26:40.838 [2024-10-30 12:38:13.444096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.444125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.444217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.444246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.444351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.444380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.444536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.444569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.444695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.444724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.444859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.444888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.444977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.445006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.445099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.445128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.445236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.445271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 00:26:40.838 [2024-10-30 12:38:13.445427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.445456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.838 qpair failed and we were unable to recover it. 
00:26:40.838 [2024-10-30 12:38:13.445584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.838 [2024-10-30 12:38:13.445613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.445744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.445774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.445905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.445934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.446058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.446208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.446334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.446494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.446681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.446839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.446993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 
00:26:40.839 [2024-10-30 12:38:13.447151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.447286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.447448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.447581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.447706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.447867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.447896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.448014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.448147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.448322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.448446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 
00:26:40.839 [2024-10-30 12:38:13.448564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.448718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.448902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.448931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.449873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.449902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 
00:26:40.839 [2024-10-30 12:38:13.450033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.450061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.450198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.450227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.450368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.450420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.450597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.450626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.450783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.450812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.450916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.450945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.451102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.451131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.451262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.451293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.451464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.451493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.451597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.451626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 
00:26:40.839 [2024-10-30 12:38:13.451729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.839 [2024-10-30 12:38:13.451759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.839 qpair failed and we were unable to recover it. 00:26:40.839 [2024-10-30 12:38:13.451918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.451947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.452036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.452065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.452188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.452218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.452366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.452413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.452545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.452574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.452727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.452756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.452855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.453023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.453053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:40.840 [2024-10-30 12:38:13.453144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.453173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 
00:26:40.840 [2024-10-30 12:38:13.453273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.840 [2024-10-30 12:38:13.453302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:40.840 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.453426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.453472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.453657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.453708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.453828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.453856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.453956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.453984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.454086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.454114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.454205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.454242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.454371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.454401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.454513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.454542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 00:26:41.145 [2024-10-30 12:38:13.454650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.145 [2024-10-30 12:38:13.454679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.145 qpair failed and we were unable to recover it. 
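On Linux, errno = 111 is ECONNREFUSED: each TCP connect() to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being refused because no target is accepting connections there, so posix_sock_create() fails and the host keeps retrying. The following is a minimal sketch, not part of this CI run, that reproduces the same errno with a bare socket on a Linux host when nothing listens on the target port; the address and port are taken from the log records above.

/* Sketch only: shows how connect() surfaces errno 111 (ECONNREFUSED)
 * when nothing listens on the NVMe/TCP port, mirroring the posix.c
 * errors above. Address and port mirror the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        /* Prints "errno = 111 (Connection refused)" when the host is up
         * but the port has no listener; an unreachable host would fail
         * with a different errno instead. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}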
00:26:41.145 [2024-10-30 12:38:13.454926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.145 [2024-10-30 12:38:13.454971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.145 qpair failed and we were unable to recover it.
00:26:41.145 [2024-10-30 12:38:13.455172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72f30 (9): Bad file descriptor
00:26:41.145 [2024-10-30 12:38:13.455328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.145 [2024-10-30 12:38:13.455373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.145 qpair failed and we were unable to recover it.
[... further identical connect() failures for tqpair=0x7fd8f8000b90 and tqpair=0x1b64fa0 through 12:38:13.456164; repetitions omitted ...]
[... the same errno = 111 connect() failure and unrecovered-qpair sequence continues through 12:38:13.462148, cycling over tqpair=0x1b64fa0, 0x7fd8fc000b90, 0x7fd904000b90, and 0x7fd8f8000b90; repetitions omitted ...]
00:26:41.146 [2024-10-30 12:38:13.462341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.462372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.462466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.462497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.462653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.462705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.462890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.462955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.463107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.463151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.463341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.463372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.463471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.463502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.463644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.463690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.463872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.463913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.464030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.464071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 
00:26:41.146 [2024-10-30 12:38:13.464197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.464238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.464432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.464477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.464607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.464660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.464805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.146 [2024-10-30 12:38:13.464856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.146 qpair failed and we were unable to recover it. 00:26:41.146 [2024-10-30 12:38:13.465001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.465052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.465159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.465193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.465299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.465332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.465456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.465486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.465614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.465663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.465829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.465859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 
00:26:41.147 [2024-10-30 12:38:13.465990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.466023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.466153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.466183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.466315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.466345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.466475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.466505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.466825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.466891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.467103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.467170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.467347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.467378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.467512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.467542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.467667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.467730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.467930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.467972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 
00:26:41.147 [2024-10-30 12:38:13.468107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.468164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.468320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.468351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.468484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.468514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.468642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.468673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.468816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.468858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.469074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.469118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.469262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.469293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.469426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.469456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.469604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.469647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.469839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.469882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 
00:26:41.147 [2024-10-30 12:38:13.470093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.470158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.470330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.470361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.470520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.470550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.470650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.470701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.470876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.470919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.471111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.471154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.147 qpair failed and we were unable to recover it. 00:26:41.147 [2024-10-30 12:38:13.471362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.147 [2024-10-30 12:38:13.471396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.471527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.471557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.471677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.471707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.471869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.471899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 
00:26:41.148 [2024-10-30 12:38:13.471998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.472028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.472192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.472292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.472424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.472456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.472552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.472632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.472852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.472918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.473142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.473209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.473407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.473438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.473568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.473599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.473722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.473751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.473907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.473936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 
00:26:41.148 [2024-10-30 12:38:13.474070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.474140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.474277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.474309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.474421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.474466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.474632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.474676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.474865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.474907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.475041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.475083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.475267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.475328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.475430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.475460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.475616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.475672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.475860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.475904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 
00:26:41.148 [2024-10-30 12:38:13.476042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.476098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.476239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.476315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.476447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.476477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.476641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.476671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.476825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.476867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.477039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.477083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.477245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.477285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.477412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.477442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.148 [2024-10-30 12:38:13.477591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.148 [2024-10-30 12:38:13.477634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.148 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.477844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.477888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 
00:26:41.149 [2024-10-30 12:38:13.478048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.478091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.478319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.478349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.478486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.478516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.478637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.478691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.478862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.478906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.479098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.479142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.479330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.479376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.479500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.479545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.479750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.479796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.480010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.480053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 
00:26:41.149 [2024-10-30 12:38:13.480190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.480233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.480459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.480504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.480674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.480727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.480880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.480934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.481067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.481118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.481221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.481276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.481375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.481405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.481579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.481629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.481755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.481806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.481936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.481965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 
00:26:41.149 [2024-10-30 12:38:13.482070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.482102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.482253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.482308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.482483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.482528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.482665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.482698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.482795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.482826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.482986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.483017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.483150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.483182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.483315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.483345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.483525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.149 [2024-10-30 12:38:13.483576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.149 qpair failed and we were unable to recover it. 00:26:41.149 [2024-10-30 12:38:13.483746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.483778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 
00:26:41.150 [2024-10-30 12:38:13.483951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.484017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.484182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.484213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.484361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.484392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.484492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.484522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.484690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.484734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.484866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.484908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.485035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.485079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.485242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.485286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.485418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.485449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.485569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.485612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 
00:26:41.150 [2024-10-30 12:38:13.485830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.485874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.486059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.486105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.486309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.486342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.486481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.486526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.486688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.486742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.486908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.486962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.487115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.487170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.487312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.487346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.487477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.487507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.487605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.487634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 
00:26:41.150 [2024-10-30 12:38:13.487732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.487762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.487888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.487919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.488045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.488074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.488193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.150 [2024-10-30 12:38:13.488222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.150 qpair failed and we were unable to recover it. 00:26:41.150 [2024-10-30 12:38:13.488366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.488399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.488569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.488642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.488850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.488897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.489030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.489077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.489217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.489285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.489409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.489441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 
00:26:41.151 [2024-10-30 12:38:13.489575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.489621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.489803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.489847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.490092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.490157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.490340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.490370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.490494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.490525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.490691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.490735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.490950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.490993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.491130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.491205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.491396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.491425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 00:26:41.151 [2024-10-30 12:38:13.491535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.151 [2024-10-30 12:38:13.491565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.151 qpair failed and we were unable to recover it. 
00:26:41.151 [2024-10-30 12:38:13.491660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.491720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.491864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.491923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.492118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.492149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.492254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.492298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.492456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.492486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.492634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.492680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.492854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.492900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.493096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.493142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.493364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.493394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.493503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.493533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.493689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.493753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.493898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.493943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.494167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.494213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.494385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.494430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.494570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.494602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.151 qpair failed and we were unable to recover it.
00:26:41.151 [2024-10-30 12:38:13.494735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.151 [2024-10-30 12:38:13.494765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.494897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.494927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.495095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.495140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.495332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.495363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.495489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.495519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.495648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.495677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.495804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.495834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.495962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.495994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.496157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.496217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.496425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.496471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.496625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.496677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.496883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.496930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.497172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.497218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.497371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.497404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.497560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.497590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.497713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.497742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.497869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.497900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.498086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.498150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.498384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.498430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.498591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.498640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.498780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.499060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.499107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.499267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.499321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.499429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.499459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.499591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.499647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.499865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.499912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.500050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.500104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.500336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.500369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.500499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.500529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.500654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.500683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.500781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.500811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.152 [2024-10-30 12:38:13.500938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.152 [2024-10-30 12:38:13.500969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.152 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.501095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.501140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.501326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.501359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.501516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.501545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.501711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.501741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.501894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.501951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.502129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.502183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.502330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.502361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.502487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.502519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.502649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.502695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.502886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.502932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.503115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.503163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.503333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.503364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.503464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.503494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.503619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.503649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.503809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.503838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.504020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.504086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.504212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.504243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.504359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.504390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.504518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.504548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.504755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.504814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.504969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.505019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.505111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.505140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.505270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.505305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.505432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.505463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.505659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.505704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.505910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.505955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.506155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.506223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.506416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.506448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.506617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.506664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.506878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.506923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.507066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.507112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.153 qpair failed and we were unable to recover it.
00:26:41.153 [2024-10-30 12:38:13.507297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.153 [2024-10-30 12:38:13.507346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.507479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.507508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.507661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.507691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.507891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.507936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.508156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.508201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.508358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.508389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.508517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.508547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.508685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.508716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.508814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.508845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.508991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.509056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.509246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.509302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.509445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.509478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.509653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.509699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.509925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.509972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.510232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.510337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.510501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.510531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.510638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.510693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.510914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.510961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.511141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.511188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.511373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.511404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.511549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.511625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.511874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.511906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.512189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.512235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.512379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.512410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.512570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.512601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.512737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.512788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.512975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.513024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.513172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.513221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.513418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.513462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.513587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.513647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.154 [2024-10-30 12:38:13.513847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.154 [2024-10-30 12:38:13.513898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.154 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.514047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.514098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.514275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.514334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.514453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.514507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.514635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.514685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.514809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.514839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.514966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.514995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.515119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.515148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.515296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.515341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.515497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.515543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.515655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.515688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.515781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.515818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.515946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.515976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.516104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.516134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.516240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.516277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.516375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.516431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.516629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.516685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.516852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.516911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.517042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.517071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.517194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.517224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.517398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.517451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.517548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.517578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.517713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.517742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.517901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.517931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.518084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.518113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.518218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.518248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.518391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.518421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.518514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.518544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.518712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.518761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.518882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.518911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.519064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.519093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.519175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.155 [2024-10-30 12:38:13.519204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.155 qpair failed and we were unable to recover it.
00:26:41.155 [2024-10-30 12:38:13.519367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.519420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.519567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.519624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.519789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.519841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.520068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.520117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.520263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.520316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.520506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.520554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.520793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.520843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.521011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.521059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.521250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.521326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.521428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.521459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.521682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.521731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.521957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.522005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.522172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.522220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.522407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.522439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.522568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.522620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.522746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.522808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.523012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.523064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.523193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.523223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.523433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.523485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.523650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.523699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.523795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.523825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.523979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.524031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.524152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.524181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.524281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.524311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.524426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.524483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.524667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.524713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.524877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.524931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.525047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.525077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.525172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.156 [2024-10-30 12:38:13.525201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.156 qpair failed and we were unable to recover it.
00:26:41.156 [2024-10-30 12:38:13.525353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.525399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.525550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.525582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.525704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.525734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.525898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.525946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.526163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.526213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.526395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.526441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.526666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.526719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.526894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.526945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.527112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.527166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.527323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.527353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.527534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.527564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.527714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.527764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.527880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.527938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.528061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.528090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.528214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.528267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.528409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.528441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.528578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.528628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.528767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.528833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.529012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.529064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.529251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.529288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.529420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.529450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.529563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.529612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.529767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.529816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.530049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.530100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.530323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.530354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.530447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.530477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.530601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.530631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.530726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.530757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.530962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.531012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.531161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.531219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.531403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.531437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.157 qpair failed and we were unable to recover it.
00:26:41.157 [2024-10-30 12:38:13.531553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.157 [2024-10-30 12:38:13.531584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.531796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.531852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.532088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.532142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.532345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.532391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.532516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.532560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.532659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.532690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.532845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.532899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.533042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.533102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.533199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.533228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.533414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.533467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.533698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.533748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.533911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.533966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.534176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.534232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.534423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.534464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.534655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.534704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.534882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.534931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.535130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.535181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.535378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.535409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.535511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.535541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.535637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.535667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.535795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.535824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.535993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.536041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.536284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.536337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.536487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.536532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.536762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.536815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.537001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.537050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.537241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.537317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.537425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.537455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.537587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.537617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.537858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.537906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.538133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.158 [2024-10-30 12:38:13.538183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.158 qpair failed and we were unable to recover it.
00:26:41.158 [2024-10-30 12:38:13.538389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.538421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.538516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.538546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.538644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.538674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.538834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.538894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.539124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.539173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.539352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.539382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.539522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.539555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.539655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.539687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.539831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.539880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.540047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.540096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.540240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.540314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.540474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.540505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.540600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.540658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.540840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.540888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.541063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.541093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.541250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.541318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.541500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.541530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.541706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.541755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.541977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.542025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.542208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.542271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.542477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.542525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.542716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.542766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.542945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.543004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.543231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.543293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.543489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.543539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.543766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.543815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.544010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.544060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.544214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.544284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.544482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.544530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.544759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.544807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.544972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.159 [2024-10-30 12:38:13.545021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.159 qpair failed and we were unable to recover it.
00:26:41.159 [2024-10-30 12:38:13.545164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.545213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.545400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.545450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.545669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.545724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.545922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.545972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.546132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.546180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.546409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.546460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.546615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.546664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.546825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.546874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.547149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.547214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.547440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.547490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.547716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.547764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.547962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.548011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.548189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.548296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.548505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.548554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.548746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.548797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.548990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.549040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.549283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.549333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.549566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.549615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.549853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.549901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.550128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.550176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.550420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.550470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.550697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.550746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.550912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.550961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.551131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.551180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.551393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.551443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.551588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.551637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.551838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.551886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.552081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.552130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.552325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.552375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.552580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.552629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.160 qpair failed and we were unable to recover it.
00:26:41.160 [2024-10-30 12:38:13.552820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.160 [2024-10-30 12:38:13.552869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.553023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.553116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.553342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.553392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.553550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.553601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.553803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.553854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.554041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.554089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.554308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.554361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.554562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.554614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.554777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.554829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.555033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.555085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.555329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.555384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.555582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.555634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.555839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.555891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.556089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.556142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.556309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.556364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.556584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.556637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.556840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.556894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.557144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.557210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.557457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.557510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.557767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.557818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.558079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.558132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.558368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.558421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.558610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.558662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.558874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.558926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.559147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.559212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.161 qpair failed and we were unable to recover it.
00:26:41.161 [2024-10-30 12:38:13.559430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.161 [2024-10-30 12:38:13.559485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.559683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.559735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.559976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.560029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.560228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.560296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.560511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.560563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.560776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.560829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.561032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.561083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.561295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.561358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.561527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.561587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.561791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.561843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.562082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.562134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.562372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.562426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.562627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.562679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.562922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.562975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.563143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.563196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.563357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.563410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.563572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.563633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.563846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.563901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.564109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.564161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.564373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.564427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.564593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.564646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.564867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.564919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.565099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.162 [2024-10-30 12:38:13.565151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.162 qpair failed and we were unable to recover it.
00:26:41.162 [2024-10-30 12:38:13.565360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.565416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.565630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.565683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.565881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.565934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.566151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.566204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.566431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.566486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.566669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.566722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.566879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.566933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.567149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.567203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.162 [2024-10-30 12:38:13.567395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.162 [2024-10-30 12:38:13.567450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.162 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.567657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.567711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 
00:26:41.163 [2024-10-30 12:38:13.567948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.568001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.568312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.568366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.568574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.568626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.568837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.568889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.569093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.569145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.569393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.569446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.569686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.569747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.569990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.570043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.570252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.570320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.570488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.570544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 
00:26:41.163 [2024-10-30 12:38:13.570765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.570823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.571002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.571060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.571282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.571340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.571563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.571615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.571830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.571882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.572090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.572146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.572363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.572417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.572673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.572725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.572905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.572959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.573193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.573297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 
00:26:41.163 [2024-10-30 12:38:13.573514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.573567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.573771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.573825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.574075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.574127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.574301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.574370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.574575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.574628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.574831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.574883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.575094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.575146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.575355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.575410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.575618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.163 [2024-10-30 12:38:13.575670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.163 qpair failed and we were unable to recover it. 00:26:41.163 [2024-10-30 12:38:13.575884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.575937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 
00:26:41.164 [2024-10-30 12:38:13.576117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.576197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.576453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.576505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.576705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.576758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.576921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.576973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.577242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.577307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.577526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.577580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.577760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.577813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.578030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.578082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.578288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.578341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.578539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.578591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 
00:26:41.164 [2024-10-30 12:38:13.578762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.578836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.579100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.579156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.579360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.579418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.579611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.579666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.579817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.579873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.580088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.580146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.580340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.580397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.580660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.580717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.580896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.580953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.581147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.581212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 
00:26:41.164 [2024-10-30 12:38:13.581518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.581574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.581796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.581853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.582106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.582173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.582396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.582454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.582695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.582751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.582985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.583041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.583208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.583291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.583559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.583622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.583870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.583936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 00:26:41.164 [2024-10-30 12:38:13.584203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.164 [2024-10-30 12:38:13.584322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.164 qpair failed and we were unable to recover it. 
00:26:41.165 [2024-10-30 12:38:13.584600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.584656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.584870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.584925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.585145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.585211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.585511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.585576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.585835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.585901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.586217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.586327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.586602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.586659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.586893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.586949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.587138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.587196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.587441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.587499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 
00:26:41.165 [2024-10-30 12:38:13.587681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.587739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.588013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.588069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.588289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.588347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.588576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.588632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.588889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.588945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.589141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.589197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.589471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.589528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.589752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.589808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.590054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.590119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.590414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.590472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 
00:26:41.165 [2024-10-30 12:38:13.590706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.590761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.590980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.591036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.591218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.591286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.591505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.591562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.591766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.591822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.591997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.592053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.592300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.592359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.592626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.592682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.592875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.592932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.593185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.593241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 
00:26:41.165 [2024-10-30 12:38:13.593506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.593563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.593799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.165 [2024-10-30 12:38:13.593855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.165 qpair failed and we were unable to recover it. 00:26:41.165 [2024-10-30 12:38:13.594068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.594124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.594313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.594373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.594562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.594618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.594878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.594937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.595176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.595236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.595490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.595549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.595774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.595835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.596118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.596177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 
00:26:41.166 [2024-10-30 12:38:13.596433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.596493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.596725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.596785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.597058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.597119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.597417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.597487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.597740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.597808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.598010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.598072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.598305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.598366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.598641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.598701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.598877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.598938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.599216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.599314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 
00:26:41.166 [2024-10-30 12:38:13.599559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.599619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.599858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.599918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.600146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.600205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.600506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.600598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.600857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.600922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.601162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.601222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.601536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.601596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.601839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.601900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.602144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.602203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.602452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.602516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 
00:26:41.166 [2024-10-30 12:38:13.602777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.602837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.603076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.603136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.603340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.166 [2024-10-30 12:38:13.603402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.166 qpair failed and we were unable to recover it. 00:26:41.166 [2024-10-30 12:38:13.603651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.603712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.604013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.604074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.604302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.604364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.604603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.604664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.604873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.604933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.605166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.605227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.605445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.605510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 
00:26:41.167 [2024-10-30 12:38:13.605747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.605808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.606090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.606151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.606359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.606424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.606659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.606722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.606998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.607059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.607297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.607358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.607630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.607690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.607963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.608023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.608309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.608371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 00:26:41.167 [2024-10-30 12:38:13.608593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.167 [2024-10-30 12:38:13.608653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.167 qpair failed and we were unable to recover it. 
00:26:41.167 [2024-10-30 12:38:13.608923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.167 [2024-10-30 12:38:13.608983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.167 qpair failed and we were unable to recover it.
00:26:41.167 [... the three lines above repeat roughly 200 more times between 12:38:13.609184 and 12:38:13.677284, identical except for the timestamps and the tqpair handle, which is 0x7fd8fc000b90 for most records and 0x7fd8f8000b90 for a short run of them ...]
00:26:41.174 [2024-10-30 12:38:13.677492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.677557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.677856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.677920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.678210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.678293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.678519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.678585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.678880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.678945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.679238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.679339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.679596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.679661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.679916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.679981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.174 [2024-10-30 12:38:13.680226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.174 [2024-10-30 12:38:13.680311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.174 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.680564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.680631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 
00:26:41.175 [2024-10-30 12:38:13.680934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.681000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.681281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.681348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.681597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.681664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.681926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.681992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.682291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.682358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.682650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.682714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.683024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.683089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.683310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.683379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.683579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.683645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.683868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.683934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 
00:26:41.175 [2024-10-30 12:38:13.684154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.684230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.684525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.684591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.684880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.684947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.685192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.685273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.685487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.685552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.685790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.685857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.686142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.686206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.686562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.686659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.686959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.687028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.687250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.687334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 
00:26:41.175 [2024-10-30 12:38:13.687637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.687701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.687917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.687984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.688233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.688318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.688611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.688676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.688893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.688959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.689172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.689238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.689544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.689609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.689900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.689964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.690182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.690247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.690558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.690623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 
00:26:41.175 [2024-10-30 12:38:13.690910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.690974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.175 qpair failed and we were unable to recover it. 00:26:41.175 [2024-10-30 12:38:13.691189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.175 [2024-10-30 12:38:13.691252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.691485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.691549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.691792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.691858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.692159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.692530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.692595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.692881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.692944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.693197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.693282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.693555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.693620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.693819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.693882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 
00:26:41.176 [2024-10-30 12:38:13.694129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.694193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.694505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.694570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.694814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.694880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.695188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.695252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.695513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.695578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.695840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.695902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.696159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.696222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.696529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.696594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.696802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.696870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.697078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.697142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 
00:26:41.176 [2024-10-30 12:38:13.697438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.697516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.697772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.697837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.698102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.698166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.698471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.698536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.698748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.698813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.699056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.699123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.699390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.699456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.699717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.699781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.700087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.700150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.700424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.700489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 
00:26:41.176 [2024-10-30 12:38:13.700726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.700790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.701099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.701162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.701479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.701543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.176 qpair failed and we were unable to recover it. 00:26:41.176 [2024-10-30 12:38:13.701836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.176 [2024-10-30 12:38:13.701901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.702202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.702277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.702572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.702636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.702937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.703001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.703253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.703331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.703584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.703648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.703935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.703998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 
00:26:41.177 [2024-10-30 12:38:13.704302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.704367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.704619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.704683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.704897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.704961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.705187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.705252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.705569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.705633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.705926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.705991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.706237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.706322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.706576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.706640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.706925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.706989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.707288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.707354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 
00:26:41.177 [2024-10-30 12:38:13.707603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.707666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.707953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.708017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.708285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.708354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.708603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.708667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.708923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.708991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.709238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.709331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.709585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.709652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.709861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.709926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.710174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.710239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.710528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.710593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 
00:26:41.177 [2024-10-30 12:38:13.710829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.710903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.711126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.711191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.711456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.711524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.711815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.711878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.177 [2024-10-30 12:38:13.712124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-10-30 12:38:13.712188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.177 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.712493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.712559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.712862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.712926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.713227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.713312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.713534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.713600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.713888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.713952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 
00:26:41.178 [2024-10-30 12:38:13.714252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.714335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.714593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.714659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.714908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.714973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.715225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.715310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.715628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.715692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.715979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.716043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.716349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.716415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.716627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.716692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.717001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.717069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.717360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.717427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 
00:26:41.178 [2024-10-30 12:38:13.717690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.717755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.717964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.718028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.718278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.718345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.718573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.718638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.718826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.718891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.719178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.719242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.719523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.719587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.719843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.719907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.720158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.720222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.720494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.720559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 
00:26:41.178 [2024-10-30 12:38:13.720852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.720918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.178 qpair failed and we were unable to recover it. 00:26:41.178 [2024-10-30 12:38:13.721225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-10-30 12:38:13.721316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.721584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.721648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.721889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.721953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.722177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.722244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.722563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.722628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.722828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.722891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.723143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.723206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.723487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.723555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 00:26:41.179 [2024-10-30 12:38:13.723855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.179 [2024-10-30 12:38:13.723918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.179 qpair failed and we were unable to recover it. 
00:26:41.185 [2024-10-30 12:38:13.790660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.185 [2024-10-30 12:38:13.790734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.185 qpair failed and we were unable to recover it. 00:26:41.185 [2024-10-30 12:38:13.791037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.185 [2024-10-30 12:38:13.791101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.185 qpair failed and we were unable to recover it. 00:26:41.185 [2024-10-30 12:38:13.791394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.185 [2024-10-30 12:38:13.791459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.185 qpair failed and we were unable to recover it. 00:26:41.185 [2024-10-30 12:38:13.791707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.185 [2024-10-30 12:38:13.791770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.185 qpair failed and we were unable to recover it. 00:26:41.185 [2024-10-30 12:38:13.792060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.185 [2024-10-30 12:38:13.792123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.185 qpair failed and we were unable to recover it. 00:26:41.185 [2024-10-30 12:38:13.792377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.185 [2024-10-30 12:38:13.792451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.185 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.792737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.792801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.793090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.793155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.793432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.793497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.793747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.793810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 
00:26:41.186 [2024-10-30 12:38:13.794104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.794170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.794461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.794526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.794813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.794877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.795143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.795215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.795489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.795554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.795806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.795870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.796071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.796132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.796417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.796483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.796738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.796806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.797099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.797165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 
00:26:41.186 [2024-10-30 12:38:13.797390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.797458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.797701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.797767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.186 [2024-10-30 12:38:13.798060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.186 [2024-10-30 12:38:13.798124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.186 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.798319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.798381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.798576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.798637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.798829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.798891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.799177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.799240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.799556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.799648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.799929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.799999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.800278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.800347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 
00:26:41.456 [2024-10-30 12:38:13.800588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.800652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.800939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.801003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.801287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.801353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.801605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.801669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.801915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.801978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.802151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.802215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.802437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.802501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.802725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.802792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.803008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.803073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.803354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.803420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 
00:26:41.456 [2024-10-30 12:38:13.803684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.803748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.804011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.804074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.804291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.804356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.804607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.804671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.804859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.804921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.805169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.805233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.805544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.805610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.805920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.805983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.806231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.806310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.806609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.806673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 
00:26:41.456 [2024-10-30 12:38:13.806975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.807038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.807337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.807402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.807610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.807674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.807891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.807954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.808200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.808287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.808548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.808613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.808913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.808976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.809225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.809301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.809545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.809609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 00:26:41.456 [2024-10-30 12:38:13.809816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.456 [2024-10-30 12:38:13.809879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.456 qpair failed and we were unable to recover it. 
00:26:41.457 [2024-10-30 12:38:13.810162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.810225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.810449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.810513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.810823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.810889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.811150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.811215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.811492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.811557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.811813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.811878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.812077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.812140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.812389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.812455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.812756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.812820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.813044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.813108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 
00:26:41.457 [2024-10-30 12:38:13.813358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.813423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.813646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.813709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.813960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.814027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.814292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.814358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.814609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.814673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.814955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.815019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.815284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.815348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.815565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.815629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.815933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.815997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.816231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.816315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 
00:26:41.457 [2024-10-30 12:38:13.816588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.816653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.816938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.817014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.817306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.817374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.817651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.817715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.817917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.817981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.818237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.818325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.818550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.818614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.818899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.818964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.819251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.819331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.819623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.819686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 
00:26:41.457 [2024-10-30 12:38:13.819975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.820039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.820307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.820373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.820668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.820731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.820987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.821051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.821298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.821364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.821629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.821695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.821986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.822051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.822319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.822386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.822580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.822644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.822933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.822996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 
00:26:41.457 [2024-10-30 12:38:13.823289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.823355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.823625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.823688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.823940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.824003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.824275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.824342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.824558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.824622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.824871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.824936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.457 [2024-10-30 12:38:13.825224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.457 [2024-10-30 12:38:13.825346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.457 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.825643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.825707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.825931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.825996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.826302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.826369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 
00:26:41.458 [2024-10-30 12:38:13.826658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.826722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.826974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.827039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.827296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.827362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.827656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.827721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.827969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.828033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.828311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.828376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.828626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.828690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.828936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.828999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.829203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.829285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.829535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.829600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 
00:26:41.458 [2024-10-30 12:38:13.829886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.829949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.830143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.830206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.830451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.830516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.830777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.830841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.831099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.831163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.831478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.831544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.831796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.831859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.832147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.832215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.832508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.832578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 00:26:41.458 [2024-10-30 12:38:13.832870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.832934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it. 
00:26:41.458 [2024-10-30 12:38:13.833179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.458 [2024-10-30 12:38:13.833243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.458 qpair failed and we were unable to recover it.
00:26:41.463 [2024-10-30 12:38:13.833541 .. 12:38:13.902254] (previous sequence repeated: posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:26:41.463 [2024-10-30 12:38:13.902492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.902555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.902802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.902865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.903164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.903227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.903496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.903559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.903790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.903853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.904141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.904204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.904472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.904536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.904834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.904897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.905209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.905292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.905555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.905619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 
00:26:41.463 [2024-10-30 12:38:13.905824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.905888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.906172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.906236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.906510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.906579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.906849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.906915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.907156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.907220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.907537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.907601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.907901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.907964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.908277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.908342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.908604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.908668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.908875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.908938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 
00:26:41.463 [2024-10-30 12:38:13.909170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.909235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.909540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.909604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.909893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.909956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.910199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.910281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.910485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.910549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.910843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.910911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.911214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.911324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.911603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.911667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.911980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.912045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.912350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.912417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 
00:26:41.463 [2024-10-30 12:38:13.912686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.912750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.913056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.913119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.913341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.913406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.913665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.913728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.914024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.914087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.914393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.914458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.914709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.914775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.915056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.915119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.915373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.915438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 00:26:41.463 [2024-10-30 12:38:13.915694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.915759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.463 qpair failed and we were unable to recover it. 
00:26:41.463 [2024-10-30 12:38:13.916070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.463 [2024-10-30 12:38:13.916134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.916398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.916464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.916715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.916779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.917030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.917094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.917394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.917459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.917762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.917825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.918078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.918143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.918376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.918441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.918643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.918706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.918928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.918992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 
00:26:41.464 [2024-10-30 12:38:13.919243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.919324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.919580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.919644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.919890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.919953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.920237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.920340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.920642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.920705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.920959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.921025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.921317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.921384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.921659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.921722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.922017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.922080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.922323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.922389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 
00:26:41.464 [2024-10-30 12:38:13.922642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.922706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.923005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.923068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.923355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.923420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.923720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.923783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.924028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.924091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.924359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.924424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.924637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.924699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.924957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.925021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.925243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.925337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.925636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.925699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 
00:26:41.464 [2024-10-30 12:38:13.925963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.926027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.926324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.926390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.926706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.926769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.926965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.927029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.927277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.927344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.927582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.927646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.927941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.928005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.928299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.928364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.928595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.928658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.928954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.929017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 
00:26:41.464 [2024-10-30 12:38:13.929270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.929344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.929632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.929696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.929996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.930059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.930316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.930380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.930582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.930645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.930886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.930949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.931251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.931340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.464 qpair failed and we were unable to recover it. 00:26:41.464 [2024-10-30 12:38:13.931586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.464 [2024-10-30 12:38:13.931650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.931903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.931969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.932186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.932250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-10-30 12:38:13.932520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.932585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.932848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.932911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.933203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.933283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.933512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.933577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.933879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.933943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.934246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.934327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.934580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.934643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.934928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.934992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.935300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.935365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.935558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.935621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-10-30 12:38:13.935913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.935976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.936281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.936346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.936592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.936655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.936891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.936956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.937212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.937291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.937563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.937627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.937911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.937975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.938236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.938316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.938566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.938630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.938884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.938947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-10-30 12:38:13.939189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.939254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.939521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.939585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.939880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.939944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.940243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.940330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.940564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.940879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.940942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.941249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.941334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.941628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.941693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.941946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.942009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.942253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.942335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-10-30 12:38:13.942591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.942655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.942905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.942979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.943313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.943378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.943670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.943734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.944022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.944087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.944332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.944397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.944620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.944684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.944927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.944991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.945277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.945341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.945645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.945710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 
00:26:41.465 [2024-10-30 12:38:13.946003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.946067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.946372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.946438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.946639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.946703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.946995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.947058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.947312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.465 [2024-10-30 12:38:13.947378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.465 qpair failed and we were unable to recover it. 00:26:41.465 [2024-10-30 12:38:13.947604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.947668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.947931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.947996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.948291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.948355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.948619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.948682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.948952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.949016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 
00:26:41.466 [2024-10-30 12:38:13.949315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.949380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.949578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.949642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.949930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.949994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.950298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.950364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.950618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.950682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.950934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.950997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.951326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.951392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.951639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.951702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.951949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.952026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.952251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.952330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 
00:26:41.466 [2024-10-30 12:38:13.952601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.952665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.952912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.952975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.953215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.953294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.953564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.953627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.953860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.953924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.954178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.954242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.954519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.954584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.954786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.954850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.955073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.955137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.955405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.955471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 
00:26:41.466 [2024-10-30 12:38:13.955715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.955778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.955969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.956033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.956253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.956332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.956589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.956652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.956895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.956957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.957239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.957320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.957521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.957583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.957830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.957893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.958181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.958245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.958482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.958547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 
00:26:41.466 [2024-10-30 12:38:13.958760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.958823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.959114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.959177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.959439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.959503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.959802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.959865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.960104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.960167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.960438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.960513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.960767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.960831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.961044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.961108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.961360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.961424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.961676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.961740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 
00:26:41.466 [2024-10-30 12:38:13.962041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.962105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.962361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.466 [2024-10-30 12:38:13.962426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.466 qpair failed and we were unable to recover it. 00:26:41.466 [2024-10-30 12:38:13.962623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.962686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.962973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.963036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.963283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.963348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.963627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.963691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.963948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.964012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.964273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.964338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.964589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.964654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.964960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.965023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 
00:26:41.467 [2024-10-30 12:38:13.965329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.965394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.965699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.965763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.966000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.966064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.966321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.966386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.966629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.966694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.966954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.967016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.967289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.967353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.967645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.967709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.967959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.968022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.968285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.968351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 
00:26:41.467 [2024-10-30 12:38:13.968647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.968711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.968971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.969033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.969290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.969356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.969660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.969725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.969938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.970002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.970232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.970312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.970530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.970596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.970880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.970944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.971200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.971278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.971535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.971606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 
00:26:41.467 [2024-10-30 12:38:13.971856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.971920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.972147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.972210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.972486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.972550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.972854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.972918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.973225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.973309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.973610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.973674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.973931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.973995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.974232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.974324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.974577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.974640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.974893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.974956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 
00:26:41.467 [2024-10-30 12:38:13.975252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.975356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.975605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.975668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.975956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.467 [2024-10-30 12:38:13.976020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.467 qpair failed and we were unable to recover it. 00:26:41.467 [2024-10-30 12:38:13.976231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.976323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.976619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.976682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.976978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.977042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.977310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.977377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.977564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.977628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.977883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.977946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.978202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.978281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 
00:26:41.468 [2024-10-30 12:38:13.978544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.978607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.978871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.978934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.979222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.979317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.979611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.979673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.979887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.979950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.980207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.980286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.980544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.980608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.980837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.980901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.981205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.981288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.981542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.981608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 
00:26:41.468 [2024-10-30 12:38:13.981861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.981925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.982170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.982234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.982546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.982609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.982907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.982985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.983291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.983358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.983614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.983678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.983876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.983939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.984175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.984238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.984557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.984621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.984832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.984896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 
00:26:41.468 [2024-10-30 12:38:13.985143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.985206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.985515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.985580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.985811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.985876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.986129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.986192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.986484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.986549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.986853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.986917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.987184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.987247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.987539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.987605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.987823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.987888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.988189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.988252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 
00:26:41.468 [2024-10-30 12:38:13.988576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.988640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.988932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.988996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.989305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.989371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.989625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.989690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.989891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.989955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.990240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.990326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.990589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.990655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.990854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.990920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.991214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.991293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.468 qpair failed and we were unable to recover it. 00:26:41.468 [2024-10-30 12:38:13.991552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.468 [2024-10-30 12:38:13.991616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 
00:26:41.469 [2024-10-30 12:38:13.991919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.992000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.992270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.992336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.992581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.992645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.992905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.992969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.993225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.993309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.993533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.993595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.993792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.993853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.994104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.994166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.994373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.994434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.994653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.994714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 
00:26:41.469 [2024-10-30 12:38:13.994991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.995052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.995302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.995364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.995562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.995623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.995871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.995932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.996230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.996310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.996559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.996620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.996858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.996919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.997154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.997216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.997543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.997605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.997841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.997903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 
00:26:41.469 [2024-10-30 12:38:13.998116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.998176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.998440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.998503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.998746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.998809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.999085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.999146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.999352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.999415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.999693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:13.999754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:13.999990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.000051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.000246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.000340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.000569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.000630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.000833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.000898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 
00:26:41.469 [2024-10-30 12:38:14.001141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.001206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.001481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.001545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.001782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.001846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.002048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.002112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.002364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.002430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.002679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.002745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.002939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.003005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.003404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.003469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.003712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.003779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 00:26:41.469 [2024-10-30 12:38:14.004071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.469 [2024-10-30 12:38:14.004136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.469 qpair failed and we were unable to recover it. 
[... same sequence for tqpair=0x1b64fa0 at 12:38:14.004396 ...]
00:26:41.469 [2024-10-30 12:38:14.004807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.469 [2024-10-30 12:38:14.004907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.469 qpair failed and we were unable to recover it.
[... identical failure sequence for tqpair=0x7fd904000b90 repeated, 12:38:14.005175 - 12:38:14.010556 ...]
00:26:41.470 [2024-10-30 12:38:14.010826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.470 [2024-10-30 12:38:14.010898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.470 qpair failed and we were unable to recover it.
[... identical failure sequence for tqpair=0x1b64fa0 repeated, 12:38:14.011109 - 12:38:14.069417 ...]
00:26:41.474 [2024-10-30 12:38:14.069704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.474 [2024-10-30 12:38:14.069767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.474 qpair failed and we were unable to recover it.
00:26:41.474 [2024-10-30 12:38:14.070009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.070074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.070336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.070402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.070614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.070678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.070925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.070989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.071293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.071371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.071659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.071723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.071962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.072026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.072324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.072391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.072654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.072718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.073004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.073068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 
00:26:41.474 [2024-10-30 12:38:14.073366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.073433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.073691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.073756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.074049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.074112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.074427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.074493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.074729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.074795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.075076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.075140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.075396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.075462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.075686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.075751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.076012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.076076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.076295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.076362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 
00:26:41.474 [2024-10-30 12:38:14.076652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.076718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.076969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.077033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.077376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.077451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.077674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.077739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.077977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.078040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.078345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.078410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.078674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.078737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.079042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.079106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.079395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.079461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.079756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.079819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 
00:26:41.474 [2024-10-30 12:38:14.080085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.080149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.080462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.080539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.080778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.080842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.081126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.474 [2024-10-30 12:38:14.081189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.474 qpair failed and we were unable to recover it. 00:26:41.474 [2024-10-30 12:38:14.081518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.081584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.081800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.081867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.082108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.082173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.082466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.082531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.082783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.082848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.083121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.083190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 
00:26:41.475 [2024-10-30 12:38:14.083455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.083521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.083815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.083879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.084125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.084189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.084465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.084530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.084823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.084887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.085152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.085218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.085492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.085557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.085854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.085920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.086177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.086241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.086509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.086571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 
00:26:41.475 [2024-10-30 12:38:14.086821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.086887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.087109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.087174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.087441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.087506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.087796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.087860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.088153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.088216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.088487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.088554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.088846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.088912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.089163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.089228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.089479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.089556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.089852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.089916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 
00:26:41.475 [2024-10-30 12:38:14.090167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.090231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.090489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.090557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.090779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.090846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.091084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.091148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.091399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.091465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.091706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.091772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.092065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.092128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.092416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.092482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.092752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.092817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.093091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.093156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 
00:26:41.475 [2024-10-30 12:38:14.093464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.093531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.093759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.093822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.094036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.094103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.094436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.094503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.094764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.094827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.095128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.095191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.095428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.095493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.095740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.095806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.096091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.096155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.096437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.096503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 
00:26:41.475 [2024-10-30 12:38:14.096795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.475 [2024-10-30 12:38:14.096859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.475 qpair failed and we were unable to recover it. 00:26:41.475 [2024-10-30 12:38:14.097171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.097235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.097530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.097594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.097836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.097900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.098139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.098203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.098481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.098546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.098858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.098921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.099169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.099233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.099504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.099569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.099767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.099830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 
00:26:41.476 [2024-10-30 12:38:14.100088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.100151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.100384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.100452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.100707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.100769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.101063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.101126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.101374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.101440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.101735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.101798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.102012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.102075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.102290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.102356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.102613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.102676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.102928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.102992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 
00:26:41.476 [2024-10-30 12:38:14.103272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.103337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.103569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.103632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.103819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.103884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.104089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.104153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.104465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.104530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.104786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.104850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.105131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.105193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.105520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.105586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.105839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.105905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.106140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.106204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 
00:26:41.476 [2024-10-30 12:38:14.106518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.106582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.106883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.106947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.107198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.107282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.107602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.107667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.107932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.107996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.108291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.108356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.108618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.108681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.108911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.108975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.109235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.109330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.109534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.109597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 
00:26:41.476 [2024-10-30 12:38:14.109892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.109955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.110153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.110218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.110530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.110594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.110845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.110925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.111221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.111300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.111561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.111624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.111850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.111924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.112225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.112306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.476 [2024-10-30 12:38:14.112557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.476 [2024-10-30 12:38:14.112624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.476 qpair failed and we were unable to recover it. 00:26:41.477 [2024-10-30 12:38:14.112858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.477 [2024-10-30 12:38:14.112921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.477 qpair failed and we were unable to recover it. 
00:26:41.477 [2024-10-30 12:38:14.113177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.477 [2024-10-30 12:38:14.113242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.477 qpair failed and we were unable to recover it.
00:26:41.477 [... the same three-line failure (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats identically from 12:38:14.113559 through 12:38:14.183700 ...]
00:26:41.755 [2024-10-30 12:38:14.183948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.755 [2024-10-30 12:38:14.184011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.755 qpair failed and we were unable to recover it.
00:26:41.755 [2024-10-30 12:38:14.184303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.755 [2024-10-30 12:38:14.184369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.755 qpair failed and we were unable to recover it. 00:26:41.755 [2024-10-30 12:38:14.184665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.755 [2024-10-30 12:38:14.184729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.755 qpair failed and we were unable to recover it. 00:26:41.755 [2024-10-30 12:38:14.184978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.755 [2024-10-30 12:38:14.185042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.755 qpair failed and we were unable to recover it. 00:26:41.755 [2024-10-30 12:38:14.185330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.755 [2024-10-30 12:38:14.185406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.755 qpair failed and we were unable to recover it. 00:26:41.755 [2024-10-30 12:38:14.185697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.755 [2024-10-30 12:38:14.185761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.755 qpair failed and we were unable to recover it. 00:26:41.755 [2024-10-30 12:38:14.186051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.755 [2024-10-30 12:38:14.186115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.755 qpair failed and we were unable to recover it. 00:26:41.755 [2024-10-30 12:38:14.186417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.186482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.186731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.186795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.187034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.187097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.187344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.187409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 
00:26:41.756 [2024-10-30 12:38:14.187713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.187777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.188021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.188085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.188374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.188441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.188738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.188800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.189050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.189117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.189380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.189445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.189695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.189759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.190061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.190125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.190372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.190439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.190679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.190742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 
00:26:41.756 [2024-10-30 12:38:14.191005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.191069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.191346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.191412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.191671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.191735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.192023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.192086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.192364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.192430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.192727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.192791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.193048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.193112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.193320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.193384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.193674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.193737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.193932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.193998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 
00:26:41.756 [2024-10-30 12:38:14.194278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.194354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.194662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.194726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.195028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.195091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.195345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.195410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.195683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.195747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.196050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.196114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.196408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.196473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.196775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.196839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.197083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.197150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 00:26:41.756 [2024-10-30 12:38:14.197389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.756 [2024-10-30 12:38:14.197453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.756 qpair failed and we were unable to recover it. 
00:26:41.756 [2024-10-30 12:38:14.197640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.197709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.197974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.198040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 724799 Killed "${NVMF_APP[@]}" "$@"
00:26:41.756 [2024-10-30 12:38:14.198339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.198404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.198675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.198755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.198976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.199040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:41.756 [2024-10-30 12:38:14.199336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.199401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:41.756 [2024-10-30 12:38:14.199711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:41.756 [2024-10-30 12:38:14.199775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.200021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:41.756 [2024-10-30 12:38:14.200084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
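For context on the failures above: errno 111 is ECONNREFUSED on Linux. The shell message shows that target_disconnect.sh has just SIGKILLed the target application (pid 724799), so nothing is listening on 10.0.0.2:4420 and every connect() the host initiator makes is refused, after which nvme_tcp.c gives up on the qpair. A minimal standalone C sketch (illustrative only, not SPDK code; the address and port are taken from the log) reproduces the same errno:

/* Make the same TCP connect the host keeps attempting and decode errno.
 * With no listener on 10.0.0.2:4420 this prints
 * "connect() failed, errno = 111 (Connection refused)" on Linux,
 * matching the posix_sock_create errors above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* ECONNREFUSED == 111 on Linux: the peer's stack sent RST because
         * no process is listening on that address/port. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}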
00:26:41.756 [2024-10-30 12:38:14.200327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:41.756 [2024-10-30 12:38:14.200392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.200649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.200712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.200964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.201027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.201285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.201351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.201648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.201713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.201939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.202002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.202218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.202300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.202613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.756 [2024-10-30 12:38:14.202678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.756 qpair failed and we were unable to recover it.
00:26:41.756 [2024-10-30 12:38:14.202949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.203013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.203329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.203397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.203706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.203771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.204028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.204091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.204351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.204416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.204672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.204736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.204936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.204999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.205352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.205419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.205682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.205746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=725354
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:41.757 [2024-10-30 12:38:14.206043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.206108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 725354
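The trace lines above show the test restarting the target: nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk network namespace as pid 725354, with -i 0 selecting the shared-memory instance ID, -e 0xFFFF enabling tracepoint groups, and -m 0xF0 supplying the app's CPU core mask. The mask is an ordinary bitmap of core numbers; a quick C sketch (illustrative only, using the value from the log) decodes it:

/* -m 0xF0 is a hex CPU core mask. Bits 4..7 are set, so this prints
 * "cores: 4 5 6 7", i.e. the target's reactors are pinned to cores 4-7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;          /* value passed to nvmf_tgt -m */
    printf("cores:");
    for (int core = 0; core < 64; core++)
        if (mask & (1UL << core))
            printf(" %d", core);
    putchar('\n');
    return 0;
}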
00:26:41.757 [2024-10-30 12:38:14.206311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.206378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 725354 ']'
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:41.757 [2024-10-30 12:38:14.206677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.206744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:41.757 [2024-10-30 12:38:14.207008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:41.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:41.757 [2024-10-30 12:38:14.207072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:41.757 [2024-10-30 12:38:14.207369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:41.757 [2024-10-30 12:38:14.207434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.207735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.207797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.208086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.208152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
00:26:41.757 [2024-10-30 12:38:14.208412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.757 [2024-10-30 12:38:14.208486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.757 qpair failed and we were unable to recover it.
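After the restart, autotest_common.sh's waitforlisten blocks until the new process (pid 725354) is up and listening on the RPC socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts; the host-side connect() failures continue in the meantime. A rough C sketch of that kind of wait loop follows (illustrative only; the real waitforlisten is a bash helper, and the 100 ms retry interval here is an assumption):

/* Retry connect() on the app's UNIX-domain RPC socket until the freshly
 * started target accepts, or give up after max_retries attempts. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);              /* target is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);         /* retry after 100 ms (assumed interval) */
    }
    return -1;                      /* listener never appeared */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("process is listening on /var/tmp/spdk.sock");
    else
        puts("timed out waiting for the listener");
    return 0;
}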
00:26:41.757 [2024-10-30 12:38:14.208731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.208797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.208989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.209054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.209348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.209414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.209707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.209771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.210033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.210097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.210352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.210420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.210679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.210744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.211046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.211113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.211330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.211398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.211685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.211749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 
00:26:41.757 [2024-10-30 12:38:14.212004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.212068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.212292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.212359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.212647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.212711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.212953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.213018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.213301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.213367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.213616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.213679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.213959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.214023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.214277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.214344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.214615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.214679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.214970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.215034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 
00:26:41.757 [2024-10-30 12:38:14.215289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.215355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.215571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.215634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.215879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.215943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.216232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.216320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.216582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.216646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.216843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.216907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.217093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.217158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.217429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.217495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.217758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.217823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.218122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.218185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 
00:26:41.757 [2024-10-30 12:38:14.218414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.218479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.218774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.218848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.219141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.219205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.757 [2024-10-30 12:38:14.219525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.757 [2024-10-30 12:38:14.219622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.757 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.219889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.219958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.220254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.220337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.220646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.220710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.220983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.221051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.221334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.221401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.221592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.221658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 
00:26:41.758 [2024-10-30 12:38:14.221944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.222009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.222248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.222327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.222613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.222677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.222924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.222988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.223244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.223326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.223556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.223621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.223915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.223978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.224198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.224281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.224570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.224635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.224886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.224952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 
00:26:41.758 [2024-10-30 12:38:14.225200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.225286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.225539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.225604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.225906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.225970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.226269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.226334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.226579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.226645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.226902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.226968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.227224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.227311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.227559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.227624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.227855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.227922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.228124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.228189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 
00:26:41.758 [2024-10-30 12:38:14.228421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.228486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.228722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.228785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.229081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.229144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.229396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.229461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.229667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.229732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.229995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.230060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.230276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.230341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.230634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.230700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.230919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.230984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.231286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.231352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 
00:26:41.758 [2024-10-30 12:38:14.231618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.231682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.231874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.231952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.232244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.232332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.232625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.232688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.232944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.233009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.233302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.233379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.233617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.233682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.233949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.234014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.234227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.234306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.234523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.234587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 
00:26:41.758 [2024-10-30 12:38:14.234846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.234910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.235154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.235217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.235484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.235547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.235800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.235865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.236152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.236215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.236514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.236580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.236840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.236905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.758 [2024-10-30 12:38:14.237153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.758 [2024-10-30 12:38:14.237217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.758 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.237553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.237618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.237884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.237948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 
00:26:41.759 [2024-10-30 12:38:14.238156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.238221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.238482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.238549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.238856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.238920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.239175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.239248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.239519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.239583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.239825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.239889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.240141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.240204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.240469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.240537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.240845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.240910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.241160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.241226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 
00:26:41.759 [2024-10-30 12:38:14.241536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.241600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.241819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.241884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.242075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.242141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.242405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.242471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.242718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.242783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.243070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.243134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.243436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.243502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.243801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.243865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.244151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.244216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.244497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.244562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 
00:26:41.759 [2024-10-30 12:38:14.244820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.244884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.245138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.245214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.245530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.245595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.245890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.245953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.246200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.246282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.246488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.246549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.246807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.246869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.247067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.247129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.247420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.247487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.247796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.247860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 
00:26:41.759 [2024-10-30 12:38:14.248111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.248176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.248440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.248504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.248701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.248767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.249055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.249120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.249419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.249483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.249697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.249764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.249976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.250044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.250301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.250366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.250576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.250639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.250940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.251004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 
00:26:41.759 [2024-10-30 12:38:14.251300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.251364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.251659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.251723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.251936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.251999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.252236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.252316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.252607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.252671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.252957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.253023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.759 qpair failed and we were unable to recover it. 00:26:41.759 [2024-10-30 12:38:14.253324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.759 [2024-10-30 12:38:14.253388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.253604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.253669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.253967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.254032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.254233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.254312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 
00:26:41.760 [2024-10-30 12:38:14.254605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.254669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.254919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.254983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.255278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.255344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.255621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.255685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.255886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.255951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.256194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.256274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.256568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.256631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.256930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.256994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.257247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.257324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.257490] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:26:41.760 [2024-10-30 12:38:14.257576] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.760 [2024-10-30 12:38:14.257610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.257672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.257928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.257992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.258244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.258319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.258526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.258589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.258817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.258881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.259132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.259196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.259396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.259432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.259591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.259637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.259810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.259844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 
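[Editor's note, not part of the captured log: the EAL parameter line above shows the nvmf target process starting on core mask 0xF0. DPDK's -c option takes a hexadecimal bitmask of CPU cores, so 0xF0 (bits 4-7 set) pins the process to cores 4, 5, 6 and 7. A minimal, self-contained sketch of that decoding follows; it is illustrative only and not SPDK or DPDK code, with the mask value taken from the log line above.]

    /* Decode a DPDK-style hex core mask (e.g. the "-c 0xF0" seen in the
     * EAL parameters above) into the CPU core IDs it selects. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long mask = strtoul("0xF0", NULL, 16); /* value from the log */

        printf("core mask 0x%lx selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core))      /* bit N set => core N is used */
                printf(" %d", core);
        }
        printf("\n");
        return 0;
    }

[Compiled with any C compiler this prints "core mask 0xf0 selects cores: 4 5 6 7", i.e. the four cores the autotest dedicates to the nvmf target here.]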
00:26:41.760 [2024-10-30 12:38:14.259961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.259996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.260144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.260179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.260343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.260377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.260483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.260516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.260659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.260694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.260863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.260896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.261049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.261083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.261224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.261275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.261417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.261452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.261635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.261667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 
00:26:41.760 [2024-10-30 12:38:14.261833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.261866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.261977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.262012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.262158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.262192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.262344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.262377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.262489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.262523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.262669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.262703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.262843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.262877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.263023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.263056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.263195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.263228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.263413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.263445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 
00:26:41.760 [2024-10-30 12:38:14.263567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.263599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.263770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.263802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.263910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.263944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.264116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.264148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.264297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.264330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.264464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.264495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.264608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.264640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.264780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.264812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.264947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.264978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.265086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.265117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 
00:26:41.760 [2024-10-30 12:38:14.265262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.265294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.265404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.265436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.265600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.265637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.265775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.265806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.266045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.266109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.760 [2024-10-30 12:38:14.266353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.760 [2024-10-30 12:38:14.266385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.760 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.266491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.266523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.266635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.266666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.266802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.266835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.267044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.267108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 
00:26:41.761 [2024-10-30 12:38:14.267344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.267377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.267520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.267572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.267849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.267914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.268212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.268313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.268419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.268452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.268561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.268593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.268752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.268787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.268933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.268966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.269087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.269119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.269365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.269397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 
00:26:41.761 [2024-10-30 12:38:14.269528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.269585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.269792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.269824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.269959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.269991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.270131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.270162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.270366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.270398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.270504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.270537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.270676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.270713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.270825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.270857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.271069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.271120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 00:26:41.761 [2024-10-30 12:38:14.271352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.761 [2024-10-30 12:38:14.271385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.761 qpair failed and we were unable to recover it. 
00:26:41.761 [2024-10-30 12:38:14.271497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.761 [2024-10-30 12:38:14.271528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:41.761 qpair failed and we were unable to recover it.
00:26:41.761 [... the same pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts between 12:38:14.271 and 12:38:14.303, cycling over tqpair handles 0x7fd904000b90, 0x1b64fa0, and 0x7fd8fc000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:41.765 [2024-10-30 12:38:14.303881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.303907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.304957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.304983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.305096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 
00:26:41.765 [2024-10-30 12:38:14.305238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.305355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.305495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.305663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.305769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.305872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.305897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 
00:26:41.765 [2024-10-30 12:38:14.306472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.306916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.306942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.307048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.307232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.307378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.307499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.307610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.307748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 
00:26:41.765 [2024-10-30 12:38:14.307858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.307883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.308910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.308940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.309050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 
00:26:41.765 [2024-10-30 12:38:14.309173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.309351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.309505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.309649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.309759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.309881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.309907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.310021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.310047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.310174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.310201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.310324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.310354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 00:26:41.765 [2024-10-30 12:38:14.310455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.310482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.765 qpair failed and we were unable to recover it. 
00:26:41.765 [2024-10-30 12:38:14.310603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.765 [2024-10-30 12:38:14.310629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.310757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.310782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.310896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.310921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.311818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 
00:26:41.766 [2024-10-30 12:38:14.311961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.311989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.312946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.312971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 
00:26:41.766 [2024-10-30 12:38:14.313213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.313846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.313988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.314125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.314240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.314381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 
00:26:41.766 [2024-10-30 12:38:14.314528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.314668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.314776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.314893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.314919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.315066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.315220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.315374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.315497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.315648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.315789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 
00:26:41.766 [2024-10-30 12:38:14.315906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.315932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.316872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.316995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.317142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 
00:26:41.766 [2024-10-30 12:38:14.317267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.317407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.317507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.317641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.317753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.317903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.317930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.318015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.318124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.318277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.318419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 
00:26:41.766 [2024-10-30 12:38:14.318561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.318701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.318828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.766 [2024-10-30 12:38:14.318854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.766 qpair failed and we were unable to recover it. 00:26:41.766 [2024-10-30 12:38:14.319005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.319132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.319287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.319405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.319545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.319678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.319793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 
00:26:41.767 [2024-10-30 12:38:14.319935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.319960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.320878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.320906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.321006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.321035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 00:26:41.767 [2024-10-30 12:38:14.321159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.767 [2024-10-30 12:38:14.321198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.767 qpair failed and we were unable to recover it. 
00:26:41.767 [2024-10-30 12:38:14.321325 - 12:38:14.338309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.767 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 / 0x7fd8f8000b90 / 0x7fd8fc000b90 / 0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.767 qpair failed and we were unable to recover it. (this three-record pattern repeats for every connection attempt in the window above; only the timestamp and the tqpair handle vary)
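The failure signature in this run is uniform: errno = 111 on Linux is ECONNREFUSED, meaning each TCP connection attempt to 10.0.0.2:4420 (4420 is the NVMe/TCP well-known port) was actively refused, i.e. nothing was accepting on the target side at that moment. A minimal standalone C sketch that reproduces the same error string; the address and port are copied from the log, everything else is illustrative:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);            /* NVMe/TCP well-known port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    /* Against a reachable host with no listener on the port, connect()
     * fails with ECONNREFUSED (111), the errno seen throughout this log.
     * An unreachable host would instead time out or give EHOSTUNREACH. */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}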
00:26:41.769 [2024-10-30 12:38:14.338391 - 12:38:14.339432] posix.c:1055:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: connect() failed, errno = 111; qpair failed and we were unable to recover it. (same pattern; tqpair=0x7fd904000b90 / 0x1b64fa0 / 0x7fd8fc000b90 / 0x7fd8f8000b90, addr=10.0.0.2, port=4420)
00:26:41.769 [2024-10-30 12:38:14.338576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:41.769 [2024-10-30 12:38:14.339524 - 12:38:14.348821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.769 nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 / 0x7fd8fc000b90 / 0x7fd904000b90 / 0x1b64fa0 with addr=10.0.0.2, port=4420
00:26:41.770 qpair failed and we were unable to recover it. (the pattern repeats through the end of the window; every attempt in this run fails the same way)
00:26:41.770 [2024-10-30 12:38:14.348947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.348987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.770 qpair failed and we were unable to recover it. 00:26:41.770 [2024-10-30 12:38:14.349116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.349145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.770 qpair failed and we were unable to recover it. 00:26:41.770 [2024-10-30 12:38:14.349230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.349262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.770 qpair failed and we were unable to recover it. 00:26:41.770 [2024-10-30 12:38:14.349351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.349377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.770 qpair failed and we were unable to recover it. 00:26:41.770 [2024-10-30 12:38:14.349473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.349499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.770 qpair failed and we were unable to recover it. 00:26:41.770 [2024-10-30 12:38:14.349606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.349633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.770 qpair failed and we were unable to recover it. 00:26:41.770 [2024-10-30 12:38:14.349751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.770 [2024-10-30 12:38:14.349778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.349869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.349899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.349994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.350138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 
00:26:41.771 [2024-10-30 12:38:14.350278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.350424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.350528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.350689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.350904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.350932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.351040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.351185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.351329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.351472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.351589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 
00:26:41.771 [2024-10-30 12:38:14.351729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.351839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.351867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.352906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.352934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.353049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 
00:26:41.771 [2024-10-30 12:38:14.353196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.353333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.353453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.353596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.353754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.353906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.353934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 
00:26:41.771 [2024-10-30 12:38:14.354547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.354907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.354934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 
00:26:41.771 [2024-10-30 12:38:14.355861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.355896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.355990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.356872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.356964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.357125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 
00:26:41.771 [2024-10-30 12:38:14.357309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.357462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.357604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.357763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.357908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.357936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.358051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.358077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.358198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.358234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.358374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.358404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.358521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.771 [2024-10-30 12:38:14.358548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.771 qpair failed and we were unable to recover it. 00:26:41.771 [2024-10-30 12:38:14.358651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.358678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.358791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.358817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.358937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.358963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.359966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.359992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.360112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.360261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.360412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.360528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.360683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.360803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.360950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.360988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.361077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.361187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.361322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.361480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.361609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.361773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.361913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.361939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.362874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.362901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.363007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.363152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.363305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.363421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.363565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.363671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.363819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.363847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.364010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.364163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.364291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.364463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.364587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.364724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.364874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.364900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.365806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.365931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.365971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.366964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.366992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 00:26:41.772 [2024-10-30 12:38:14.367129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.367169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it. 
00:26:41.772 [2024-10-30 12:38:14.367284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.772 [2024-10-30 12:38:14.367313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.772 qpair failed and we were unable to recover it.
[... the same three-message pattern repeats continuously for the remainder of this burst (log timestamps 2024-10-30 12:38:14.367284 through 12:38:14.397162): every entry is a posix.c:1055:posix_sock_create connect() failure with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error to addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." The failures cycle through tqpair handles 0x7fd904000b90, 0x7fd8f8000b90, 0x7fd8fc000b90, and 0x1b64fa0; roughly two hundred attempts fail identically ...]
00:26:41.776 [2024-10-30 12:38:14.397323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.397351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.397436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.397465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.397562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.397589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.397717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.397754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.397873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.397900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 
00:26:41.776 [2024-10-30 12:38:14.398704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.398899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.398985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.399955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.399985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 
00:26:41.776 [2024-10-30 12:38:14.400082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.400120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.400238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.400280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.400369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.400396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.400485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.776 [2024-10-30 12:38:14.400511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.776 qpair failed and we were unable to recover it. 00:26:41.776 [2024-10-30 12:38:14.400647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.400673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.400788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.400823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.400951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.400978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.401092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.401206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.401342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 
00:26:41.777 [2024-10-30 12:38:14.401454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.401610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.401763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.401889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.401916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 
00:26:41.777 [2024-10-30 12:38:14.402796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.402909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.402935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.403898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.403926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 
00:26:41.777 [2024-10-30 12:38:14.404170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.404924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.404953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 
00:26:41.777 [2024-10-30 12:38:14.405394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.405888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.405972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.406113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.406267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.777 [2024-10-30 12:38:14.406295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.406308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.777 [2024-10-30 12:38:14.406325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.777 [2024-10-30 12:38:14.406338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.777 [2024-10-30 12:38:14.406348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.777 [2024-10-30 12:38:14.406382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 
00:26:41.777 [2024-10-30 12:38:14.406507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.406663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.406802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.406910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.406938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.407080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.407241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.407387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.407496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.407614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.407757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 
00:26:41.777 [2024-10-30 12:38:14.407893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.407921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.408040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.408067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.408093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:41.777 [2024-10-30 12:38:14.408121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:41.777 [2024-10-30 12:38:14.408148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:41.777 [2024-10-30 12:38:14.408151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:41.777 [2024-10-30 12:38:14.408186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.408213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.408329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.777 [2024-10-30 12:38:14.408356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.777 qpair failed and we were unable to recover it. 00:26:41.777 [2024-10-30 12:38:14.408475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.408503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.408631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.408659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.408744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.408772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.408882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.408919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.409054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 
00:26:41.778 [2024-10-30 12:38:14.409194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.409328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.409442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.409580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.409727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.409862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.409890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 
00:26:41.778 [2024-10-30 12:38:14.410545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.410948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.410975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.411063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.411189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.411316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.411436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.411543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.411701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 
00:26:41.778 [2024-10-30 12:38:14.411819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.411846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.412957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.412983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 
00:26:41.778 [2024-10-30 12:38:14.413208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.413946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.413985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.414114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.414155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.414250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.414299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 00:26:41.778 [2024-10-30 12:38:14.414382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.778 [2024-10-30 12:38:14.414409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:41.778 qpair failed and we were unable to recover it. 
00:26:41.778 [2024-10-30 12:38:14.414526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.778 [2024-10-30 12:38:14.414564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:41.778 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats continuously from 12:38:14.414 to 12:38:14.442 (elapsed 00:26:41.778 to 00:26:42.050): each attempt logs "connect() failed, errno = 111" from posix.c:1055:posix_sock_create, then "sock connection error" from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.", cycling across tqpair=0x7fd8fc000b90, 0x7fd904000b90, 0x7fd8f8000b90, and 0x1b64fa0, all targeting addr=10.0.0.2, port=4420 ...]
00:26:42.050 [2024-10-30 12:38:14.442456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.442484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.050 [2024-10-30 12:38:14.442608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.442647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.050 [2024-10-30 12:38:14.442724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.442750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.050 [2024-10-30 12:38:14.442843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.442869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.050 [2024-10-30 12:38:14.442967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.442999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.050 [2024-10-30 12:38:14.443092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.443119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.050 [2024-10-30 12:38:14.443218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.050 [2024-10-30 12:38:14.443274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.050 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.443403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.443432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.443518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.443546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.443694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.443733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 
00:26:42.051 [2024-10-30 12:38:14.443855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.443882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.443966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.443993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.444893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.444921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 
00:26:42.051 [2024-10-30 12:38:14.445153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.445890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.445974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 
00:26:42.051 [2024-10-30 12:38:14.446454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.446948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.446974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.447090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.447118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.447228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.447269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.447357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.447385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.447470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.447497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.051 [2024-10-30 12:38:14.447607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.447638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 
00:26:42.051 [2024-10-30 12:38:14.447748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.051 [2024-10-30 12:38:14.447775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.051 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.447858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.447886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.447968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.448860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.448887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 
00:26:42.052 [2024-10-30 12:38:14.449001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.449914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.449942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 
00:26:42.052 [2024-10-30 12:38:14.450308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.450935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.450962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.451046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.451190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.451366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.451512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 
00:26:42.052 [2024-10-30 12:38:14.451697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.451806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.451917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.451945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.452039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.052 [2024-10-30 12:38:14.452077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.052 qpair failed and we were unable to recover it. 00:26:42.052 [2024-10-30 12:38:14.452207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.452344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.452452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.452564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.452706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b64fa0 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.452834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 
00:26:42.053 [2024-10-30 12:38:14.452949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.452979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.453957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.453984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 
00:26:42.053 [2024-10-30 12:38:14.454171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.454964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.454993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 
00:26:42.053 [2024-10-30 12:38:14.455472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.455932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.455959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.456075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.456116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.456233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.456281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.456372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.456399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.456521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.456548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.053 [2024-10-30 12:38:14.456652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.456679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 
00:26:42.053 [2024-10-30 12:38:14.456803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.053 [2024-10-30 12:38:14.456829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.053 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.456908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.456935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.457881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.457999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 
00:26:42.054 [2024-10-30 12:38:14.458107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.458244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.458412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.458560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.458675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.458803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.458940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.458967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 
00:26:42.054 [2024-10-30 12:38:14.459417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.459884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.459911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 
00:26:42.054 [2024-10-30 12:38:14.460632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.460916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.460996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.461116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.461230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.461397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.461501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.461676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 00:26:42.054 [2024-10-30 12:38:14.461816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.054 [2024-10-30 12:38:14.461843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.054 qpair failed and we were unable to recover it. 
00:26:42.055 [2024-10-30 12:38:14.462338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.055 [2024-10-30 12:38:14.462368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:42.055 qpair failed and we were unable to recover it.
...
00:26:42.058 [2024-10-30 12:38:14.479164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.058 [2024-10-30 12:38:14.479205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420
00:26:42.058 qpair failed and we were unable to recover it.
...
00:26:42.059 [2024-10-30 12:38:14.486042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.059 [2024-10-30 12:38:14.486068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420
00:26:42.059 qpair failed and we were unable to recover it.
00:26:42.059 [2024-10-30 12:38:14.486156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.059 [2024-10-30 12:38:14.486182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.059 qpair failed and we were unable to recover it. 00:26:42.059 [2024-10-30 12:38:14.486276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.059 [2024-10-30 12:38:14.486303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.059 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.486384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.486410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.486503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.486529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.486679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.486706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.486788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.486814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.486928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.486955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 
00:26:42.060 [2024-10-30 12:38:14.487416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.487899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.487987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.488126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.488274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.488384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.488517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 
00:26:42.060 [2024-10-30 12:38:14.488670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.488781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.488926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.488955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.489781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 
00:26:42.060 [2024-10-30 12:38:14.489893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.489920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.490927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.490953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.491036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.491063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 
00:26:42.060 [2024-10-30 12:38:14.491145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.491171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.491252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.060 [2024-10-30 12:38:14.491285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.060 qpair failed and we were unable to recover it. 00:26:42.060 [2024-10-30 12:38:14.491374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.491401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.491515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.491541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.491626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.491654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.491734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.491761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.491850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.491879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 
00:26:42.061 [2024-10-30 12:38:14.492407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.492947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.492973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.493112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd904000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.493269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8f8000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.493408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.493524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 
00:26:42.061 [2024-10-30 12:38:14.493664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.493777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 [2024-10-30 12:38:14.493893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.493920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd8fc000b90 with addr=10.0.0.2, port=4420 00:26:42.061 qpair failed and we were unable to recover it. 00:26:42.061 A controller has encountered a failure and is being reset. 00:26:42.061 [2024-10-30 12:38:14.494076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.061 [2024-10-30 12:38:14.494122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b72f30 with addr=10.0.0.2, port=4420 00:26:42.061 [2024-10-30 12:38:14.494143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b72f30 is same with the state(6) to be set 00:26:42.061 [2024-10-30 12:38:14.494169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b72f30 (9): Bad file descriptor 00:26:42.061 [2024-10-30 12:38:14.494187] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:42.061 [2024-10-30 12:38:14.494201] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:42.061 [2024-10-30 12:38:14.494216] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:42.061 Unable to reset the controller. 
00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 Malloc0 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 [2024-10-30 12:38:14.593833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 [2024-10-30 12:38:14.622118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.061 12:38:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 724937 00:26:42.991 Controller properly reset. 00:26:48.245 Initializing NVMe Controllers 00:26:48.245 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:48.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:48.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:48.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:48.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:48.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:48.245 Initialization complete. Launching workers. 
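For reference, each rpc_cmd above wraps SPDK's JSON-RPC client; issued by hand, the same target setup looks roughly like the following sketch (the rpc.py path matches this workspace; relying on the client's default RPC socket is an assumption):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev with 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o        # flags exactly as issued by the test above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host NQN
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420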
00:26:48.245 Starting thread on core 1 00:26:48.245 Starting thread on core 2 00:26:48.245 Starting thread on core 3 00:26:48.245 Starting thread on core 0 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:48.245 00:26:48.245 real 0m10.661s 00:26:48.245 user 0m33.700s 00:26:48.245 sys 0m7.235s 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.245 ************************************ 00:26:48.245 END TEST nvmf_target_disconnect_tc2 00:26:48.245 ************************************ 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.245 rmmod nvme_tcp 00:26:48.245 rmmod nvme_fabrics 00:26:48.245 rmmod nvme_keyring 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 725354 ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 725354 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 725354 ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 725354 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 725354 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 725354' 00:26:48.245 killing process with pid 725354 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 725354 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 725354 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.245 12:38:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.148 12:38:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:50.148 00:26:50.148 real 0m15.701s 00:26:50.148 user 0m59.112s 00:26:50.148 sys 0m9.786s 00:26:50.148 12:38:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:50.148 12:38:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.148 ************************************ 00:26:50.148 END TEST nvmf_target_disconnect 00:26:50.148 ************************************ 00:26:50.148 12:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:50.148 00:26:50.148 real 5m6.803s 00:26:50.148 user 11m2.433s 00:26:50.148 sys 1m17.369s 00:26:50.148 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:50.148 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.148 ************************************ 00:26:50.148 END TEST nvmf_host 00:26:50.148 ************************************ 00:26:50.406 12:38:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:50.406 12:38:22 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:50.406 12:38:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:50.406 12:38:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:50.406 12:38:22 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:50.406 12:38:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.406 ************************************ 00:26:50.406 START TEST nvmf_target_core_interrupt_mode 00:26:50.406 ************************************ 00:26:50.406 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:50.406 * Looking for test storage... 00:26:50.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:50.406 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:50.406 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:26:50.406 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.406 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:50.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.406 --rc genhtml_branch_coverage=1 00:26:50.406 --rc genhtml_function_coverage=1 00:26:50.406 --rc genhtml_legend=1 00:26:50.406 --rc geninfo_all_blocks=1 00:26:50.406 --rc geninfo_unexecuted_blocks=1 00:26:50.406 00:26:50.406 ' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.407 --rc genhtml_branch_coverage=1 00:26:50.407 --rc genhtml_function_coverage=1 00:26:50.407 --rc genhtml_legend=1 00:26:50.407 --rc geninfo_all_blocks=1 00:26:50.407 --rc geninfo_unexecuted_blocks=1 00:26:50.407 00:26:50.407 ' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.407 --rc genhtml_branch_coverage=1 00:26:50.407 --rc genhtml_function_coverage=1 00:26:50.407 --rc genhtml_legend=1 00:26:50.407 --rc geninfo_all_blocks=1 00:26:50.407 --rc geninfo_unexecuted_blocks=1 00:26:50.407 00:26:50.407 ' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.407 --rc genhtml_branch_coverage=1 00:26:50.407 --rc genhtml_function_coverage=1 00:26:50.407 --rc genhtml_legend=1 00:26:50.407 --rc geninfo_all_blocks=1 00:26:50.407 --rc geninfo_unexecuted_blocks=1 00:26:50.407 00:26:50.407 ' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:50.407 ************************************ 00:26:50.407 START TEST nvmf_abort 00:26:50.407 ************************************ 00:26:50.407 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:50.668 * Looking for test storage... 00:26:50.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.668 --rc genhtml_branch_coverage=1 00:26:50.668 --rc genhtml_function_coverage=1 00:26:50.668 --rc genhtml_legend=1 00:26:50.668 --rc geninfo_all_blocks=1 00:26:50.668 --rc geninfo_unexecuted_blocks=1 00:26:50.668 00:26:50.668 ' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.668 --rc genhtml_branch_coverage=1 00:26:50.668 --rc genhtml_function_coverage=1 00:26:50.668 --rc genhtml_legend=1 00:26:50.668 --rc geninfo_all_blocks=1 00:26:50.668 --rc geninfo_unexecuted_blocks=1 00:26:50.668 00:26:50.668 ' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.668 --rc genhtml_branch_coverage=1 00:26:50.668 --rc genhtml_function_coverage=1 00:26:50.668 --rc genhtml_legend=1 00:26:50.668 --rc geninfo_all_blocks=1 00:26:50.668 --rc geninfo_unexecuted_blocks=1 00:26:50.668 00:26:50.668 ' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.668 --rc genhtml_branch_coverage=1 00:26:50.668 --rc genhtml_function_coverage=1 00:26:50.668 --rc genhtml_legend=1 00:26:50.668 --rc geninfo_all_blocks=1 00:26:50.668 --rc geninfo_unexecuted_blocks=1 00:26:50.668 00:26:50.668 ' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
[repetitive PATH prepend/export/echo trace from paths/export.sh@2-@6 condensed; same /opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin and /opt/protoc/21.7/bin prefixes as above] 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.668 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.669 12:38:23
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:50.669 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:52.574 12:38:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:52.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:52.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
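The discovery logic above is worth unpacking: nvmf/common.sh builds per-vendor PCI device-ID lists (E810 ports are Intel 0x1592 and 0x159b, X722 is 0x37d2, plus a range of Mellanox ConnectX IDs), keeps only the e810 list because this job runs with SPDK_TEST_NVMF_NICS=e810, and then matches each PCI function on the bus against that list. A minimal standalone sketch of the same check, assuming the usual sysfs layout rather than the harness's pci_bus_cache:

  # Sketch: list Intel E810 functions (vendor 0x8086, device 0x1592/0x159b)
  # and the kernel netdev bound to each, as the trace does below.
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor")
      device=$(cat "$dev/device")
      if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
          echo "Found ${dev##*/} ($vendor - $device)"
          ls "$dev/net" 2>/dev/null        # e.g. cvl_0_0 / cvl_0_1 on this host
      fi
  done

On this host both matching functions, 0000:0a:00.0 and 0000:0a:00.1, carry the ice driver, and their netdevs are collected next: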
00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:52.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:52.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:52.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.575 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:52.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:26:52.833 00:26:52.833 --- 10.0.0.2 ping statistics --- 00:26:52.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.833 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:26:52.833 00:26:52.833 --- 10.0.0.1 ping statistics --- 00:26:52.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.833 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:52.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=728157 
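Everything nvmf_tcp_init just traced is a self-contained network fixture: the first E810 port (cvl_0_0) moves into a fresh network namespace, cvl_0_0_ns_spdk, and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule admits TCP port 4420 on the initiator interface, and a ping in each direction proves the two ports can reach each other before any NVMe traffic flows. Condensed from the trace into a plain shell sequence (interface names are the ones from this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The sub-millisecond round-trip times above (0.258 ms and 0.149 ms) are consistent with the two ports being looped back to each other on the same host.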
00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 728157 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 728157 ']' 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:52.834 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:52.834 [2024-10-30 12:38:25.450212] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:52.834 [2024-10-30 12:38:25.451384] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:26:52.834 [2024-10-30 12:38:25.451439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.092 [2024-10-30 12:38:25.523057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.092 [2024-10-30 12:38:25.576553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.092 [2024-10-30 12:38:25.576626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.092 [2024-10-30 12:38:25.576648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.092 [2024-10-30 12:38:25.576659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.092 [2024-10-30 12:38:25.576667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.092 [2024-10-30 12:38:25.578036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.092 [2024-10-30 12:38:25.578157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.092 [2024-10-30 12:38:25.578153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.092 [2024-10-30 12:38:25.662315] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:53.092 [2024-10-30 12:38:25.662509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:53.092 [2024-10-30 12:38:25.662532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
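nvmfappstart then launches the target inside that namespace. The core mask -m 0xE is binary 1110, i.e. cores 1 to 3, which matches the "Total cores available: 3" notice and the three reactors starting on cores 1, 2 and 3; --interrupt-mode is the variant under test here, and the thread.c notices around this point show each spdk_thread being placed in interrupt mode. A sketch of the launch, with a simple stand-in for the harness's waitforlisten helper (waitforlisten polls the RPC socket; looping on the real rpc_get_methods RPC is just one way to approximate it):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Wait until the app answers on its default RPC socket, /var/tmp/spdk.sock.
  until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

Note that the RPC endpoint is a Unix-domain socket on the shared filesystem, so rpc.py can drive the target from outside the namespace even though its TCP listeners live inside it.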
00:26:53.092 [2024-10-30 12:38:25.662811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:53.092 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:53.092 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:26:53.092 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.092 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.092 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.093 [2024-10-30 12:38:25.718846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.093 Malloc0 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.093 Delay0 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.093 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.350 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.350 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:53.350 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
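Stripped of the rpc_cmd plumbing (rpc_cmd is the harness's wrapper around scripts/rpc.py), the target configuration traced here and continuing just below reduces to a handful of RPCs. The delay bdev is the interesting one: it wraps the RAM disk with large artificial latencies, presumably so that submitted I/O stays in flight long enough for the abort test to have something to cancel:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256       # TCP transport, options as traced
  $rpc bdev_malloc_create 64 4096 -b Malloc0                # 64 MB RAM disk, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000           # avg/p99 read+write latency, microseconds
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With a full second of injected latency per I/O, nearly every request the initiator submits will still be queued when the matching abort arrives.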
00:26:53.350 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.350 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.351 [2024-10-30 12:38:25.791015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.351 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:53.351 [2024-10-30 12:38:25.900166] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:55.879 Initializing NVMe Controllers 00:26:55.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:55.879 controller IO queue size 128 less than required 00:26:55.879 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:55.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:55.879 Initialization complete. Launching workers. 
00:26:55.879 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28513 00:26:55.879 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28570, failed to submit 66 00:26:55.879 success 28513, unsuccessful 57, failed 0 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.879 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.879 rmmod nvme_tcp 00:26:55.879 rmmod nvme_fabrics 00:26:55.880 rmmod nvme_keyring 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 728157 ']' 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 728157 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 728157 ']' 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 728157 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 728157 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 728157' 00:26:55.880 killing process with pid 728157 00:26:55.880 
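That run is the whole payload of the test: build/examples/abort connects at queue depth 128 on a single core and races abort commands against its own in-flight I/O; the "controller IO queue size 128 less than required" warning is the example itself noting that requests may queue in the driver at this depth. The invocation, as traced above (-t and -l are presumably the run time in seconds and the log level):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

Reading the counters: 28570 aborts were submitted and 66 could not be submitted; of those submitted, 28513 succeeded and 57 came back unsuccessful (28513 + 57 = 28570), matching the namespace's report of 28513 failed, i.e. successfully aborted, I/Os against 123 normal completions and a final "failed 0". The harness then deletes the subsystem over RPC, and nvmftestfini unloads nvme_tcp, nvme_fabrics and nvme_keyring and kills the target, as the rmmod and killprocess output shows.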
12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 728157 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 728157 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.880 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.780 00:26:57.780 real 0m7.293s 00:26:57.780 user 0m9.278s 00:26:57.780 sys 0m2.877s 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:57.780 ************************************ 00:26:57.780 END TEST nvmf_abort 00:26:57.780 ************************************ 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:57.780 ************************************ 00:26:57.780 START TEST nvmf_ns_hotplug_stress 00:26:57.780 ************************************ 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:57.780 * Looking for test storage... 
00:26:57.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:26:57.780 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.041 --rc genhtml_branch_coverage=1 00:26:58.041 --rc genhtml_function_coverage=1 00:26:58.041 --rc genhtml_legend=1 00:26:58.041 --rc geninfo_all_blocks=1 00:26:58.041 --rc geninfo_unexecuted_blocks=1 00:26:58.041 00:26:58.041 ' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.041 --rc genhtml_branch_coverage=1 00:26:58.041 --rc genhtml_function_coverage=1 00:26:58.041 --rc genhtml_legend=1 00:26:58.041 --rc geninfo_all_blocks=1 00:26:58.041 --rc geninfo_unexecuted_blocks=1 00:26:58.041 00:26:58.041 ' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.041 --rc genhtml_branch_coverage=1 00:26:58.041 --rc genhtml_function_coverage=1 00:26:58.041 --rc genhtml_legend=1 00:26:58.041 --rc geninfo_all_blocks=1 00:26:58.041 --rc geninfo_unexecuted_blocks=1 00:26:58.041 00:26:58.041 ' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:58.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.041 --rc genhtml_branch_coverage=1 00:26:58.041 --rc genhtml_function_coverage=1 
00:26:58.041 --rc genhtml_legend=1 00:26:58.041 --rc geninfo_all_blocks=1 00:26:58.041 --rc geninfo_unexecuted_blocks=1 00:26:58.041 00:26:58.041 ' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # [repetitive PATH prepend/export/echo trace from paths/export.sh@2-@6 condensed, as above] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.041 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.568 12:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.568 12:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:00.568 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:00.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.568 
12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:00.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:00.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.568 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.569 12:38:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:27:00.569 00:27:00.569 --- 10.0.0.2 ping statistics --- 00:27:00.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.569 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:27:00.569 00:27:00.569 --- 10.0.0.1 ping statistics --- 00:27:00.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.569 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=730504 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 730504 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 730504 ']' 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
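The nvmf_tcp_init sequence above is worth restating in one place. The harness found the two E810 ports 0000:0a:00.0 and 0000:0a:00.1 (device 0x159b, ice driver), exposed as cvl_0_0 and cvl_0_1, and split them across namespaces: cvl_0_0 becomes the target at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in the log (the ipts wrapper also tags the iptables rule with a comment, simplified away here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability

The two one-packet pings (0.168 ms out, 0.090 ms back from inside the namespace) confirm the link before the target application starts.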
00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:00.569 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.569 [2024-10-30 12:38:32.954096] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:00.569 [2024-10-30 12:38:32.955168] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:27:00.569 [2024-10-30 12:38:32.955229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.569 [2024-10-30 12:38:33.025643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:00.569 [2024-10-30 12:38:33.078954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.569 [2024-10-30 12:38:33.079016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.569 [2024-10-30 12:38:33.079044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.569 [2024-10-30 12:38:33.079055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.569 [2024-10-30 12:38:33.079064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.569 [2024-10-30 12:38:33.080494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.569 [2024-10-30 12:38:33.080629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.569 [2024-10-30 12:38:33.080633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.569 [2024-10-30 12:38:33.163563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:00.569 [2024-10-30 12:38:33.163810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:00.569 [2024-10-30 12:38:33.163822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:00.569 [2024-10-30 12:38:33.164076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
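The nvmfappstart step above (nvmfpid=730504, then waitforlisten) reduces to launching nvmf_tgt inside the target namespace and polling until its RPC socket answers. The launch command is verbatim from the log; the polling loop below is an assumed shape of the waitforlisten helper, not the harness's exact code:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers; abort if it died.
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done

The -m 0xE mask schedules reactors on cores 1-3, matching the three reactor notices above, and --interrupt-mode accounts for the spdk_thread intr-mode notices.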
00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:00.569 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:00.826 [2024-10-30 12:38:33.477288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.826 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:01.395 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.395 [2024-10-30 12:38:34.025863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.395 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:01.654 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:01.912 Malloc0 00:27:02.170 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:02.427 Delay0 00:27:02.427 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.709 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:02.966 NULL1 00:27:02.966 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
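Collected from the RPC calls just replayed, the whole target configuration fits on a few lines. Every command below appears verbatim above; only the comments are added. The bdev_delay_create values are per-I/O latencies in microseconds, so Delay0 wraps Malloc0 with roughly one-second average and p99 delays on both reads and writes:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, 8 KiB I/O unit
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                             # any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                      # 32 MiB RAM bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                # ~1 s injected latency
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
    $rpc bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2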
00:27:03.223 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=730809 00:27:03.223 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:03.223 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:03.223 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:04.599 Read completed with error (sct=0, sc=11) 00:27:04.599 12:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:04.599 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:04.599 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:05.164 true 00:27:05.164 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:05.164 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.728 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.986 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:05.986 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:06.244 true 00:27:06.244 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:06.244 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:06.501 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.759 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:06.759 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:07.016 true 00:27:07.016 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:07.016 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.012 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.012 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:08.012 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:08.270 true 00:27:08.270 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:08.270 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.527 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.784 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:08.784 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:09.041 true 00:27:09.299 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:09.299 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:09.556 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:09.814 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:09.814 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:10.071 true 00:27:10.071 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:10.071 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.004 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.261 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:11.261 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:11.519 true 00:27:11.519 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:11.519 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.777 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.034 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:12.034 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:12.292 true 00:27:12.292 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:12.292 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:12.549 12:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.806 12:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:12.806 12:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:13.063 true 00:27:13.063 12:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:13.063 12:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.996 12:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.253 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:14.253 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:14.511 true 00:27:14.511 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:14.511 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.076 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.076 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:15.076 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:15.334 true 00:27:15.334 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:15.334 12:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.591 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.156 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:16.156 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:16.156 true 00:27:16.156 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:16.156 12:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:17.528 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.528 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:17.528 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:17.785 true 00:27:17.785 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:17.785 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.041 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.297 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:18.297 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:18.554 true 00:27:18.554 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:18.554 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.811 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.069 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:19.069 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:19.327 true 00:27:19.327 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:19.327 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.261 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.518 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:20.518 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:20.776 true 00:27:20.776 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:20.776 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
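From here the log settles into the steady-state hotplug loop. The spdk_nvme_perf workload started at 12:38:35 (PERF_PID 730809: -q 128 -o 512 -w randread -t 30, with -Q 1000 letting it ride through I/O errors) keeps reading while namespaces come and go; the recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" entries are reads landing on NSID 1 while it is detached (sc=11 decimal is generic status 0x0b, Invalid Namespace or Format, assuming SPDK's usual decimal formatting). Reconstructed from the ns_hotplug_stress.sh line markers @44-@50 repeating above, the loop's shape is roughly the following; an inferred sketch, not the script's verbatim text:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                        # line 44: while perf lives
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # line 45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # line 46: hot-add it back
        null_size=$((null_size + 1))                                 # line 49
        $rpc bdev_null_resize NULL1 "$null_size"                     # line 50: grow NULL1 by 1 MiB
    done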
00:27:21.032 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.290 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:21.290 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:21.547 true 00:27:21.547 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:21.547 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.805 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.062 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:22.062 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:22.319 true 00:27:22.577 12:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:22.577 12:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:23.508 12:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:23.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:23.765 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:23.765 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:24.022 true 00:27:24.022 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:24.022 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.280 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.538 12:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:24.538 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:24.796 true 00:27:24.796 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:24.796 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.726 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.982 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:25.982 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:26.239 true 00:27:26.239 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:26.239 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.496 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.753 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:26.753 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:27.011 true 00:27:27.011 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:27.011 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.269 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:27.526 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:27.526 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:27.783 true 00:27:27.783 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:27.783 12:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.172 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.172 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:29.172 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:29.429 true 00:27:29.429 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:29.429 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.685 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.942 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:29.942 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:30.198 true 00:27:30.198 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:30.198 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.455 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.711 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:30.711 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:30.968 true 00:27:30.968 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:30.968 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:31.901 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.159 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:32.159 12:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:32.723 true 00:27:32.723 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:32.723 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.980 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.238 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:33.238 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:33.496 true 00:27:33.496 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:33.496 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.496 Initializing NVMe Controllers 00:27:33.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.496 Controller IO queue size 128, less than required. 00:27:33.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.496 Controller IO queue size 128, less than required. 00:27:33.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:33.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:33.496 Initialization complete. Launching workers. 
00:27:33.496 ========================================================
00:27:33.496                                                            Latency(us)
00:27:33.496 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:33.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     510.04       0.25  102688.44    3394.71 1014793.73
00:27:33.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8513.13       4.16   15036.67    2261.60  460210.29
00:27:33.496 ========================================================
00:27:33.496 Total                                                                    :    9023.17       4.41   19991.24    2261.60 1014793.73
00:27:33.496
00:27:33.802 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.078 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:34.078 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:34.336 true 00:27:34.336 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730809 00:27:34.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (730809) - No such process 00:27:34.336 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 730809 00:27:34.336 12:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.593 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:34.851 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:34.851 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:34.851 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:34.851 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:34.851 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:35.109 null0 00:27:35.109 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:35.109 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:35.109 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:35.367 null1 00:27:35.367 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:35.367 12:39:07
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:35.367 12:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:35.625 null2 00:27:35.625 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:35.625 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:35.625 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:35.883 null3 00:27:35.883 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:35.883 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:35.883 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:36.141 null4 00:27:36.141 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:36.141 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:36.141 12:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:36.399 null5 00:27:36.399 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:36.399 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:36.399 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:36.657 null6 00:27:36.657 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:36.657 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:36.657 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:36.915 null7 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.915 12:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.915 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
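The ns_hotplug_stress.sh@14, @16 and @17 entries above, together with the @18 removals that follow, trace the add_remove helper each background worker runs: it pins one namespace ID to one null bdev and attaches and detaches it ten times. A minimal sketch of that helper as the xtrace suggests it; the function body is an inference from the logged commands, and $rpc_py stands in for the full rpc.py path the log prints:

    # Hedged reconstruction of add_remove() from the @14-@18 xtrace lines.
    add_remove() {
        local nsid=$1 bdev=$2              # e.g. nsid=1 bdev=null0, per the @14 entries
        for ((i = 0; i < 10; i++)); do     # the @16 counter: ten passes per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The remaining workers are launched in the entries that continue below.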
00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
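Before these workers started, the first phase of the test (the @44-@50 entries that ended above with "kill: (730809) - No such process") swapped namespace 1 between removal and a fresh Delay0 attach while growing the NULL1 bdev one step per pass, for as long as the I/O generator with PID 730809 stayed alive; its latency summary is the table printed earlier. A hedged reconstruction of that loop, where the while-form and the $perf_pid and $rpc_py names are assumptions and the commands are the ones logged:

    # Hedged reconstruction of the @44-@50 phase; 730809 was the I/O generator.
    while kill -0 "$perf_pid"; do                      # loop while the generator lives
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                                # 1024, 1025, ... 1029 in this excerpt
        $rpc_py bdev_null_resize NULL1 "$null_size"    # new size appears to be in MB
    done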
00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
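Each add_remove launch above is immediately followed by a pids+=($!) entry, so all eight workers run concurrently, and the "wait 735441 735442 ..." entry a few lines below blocks until every worker has finished its ten passes. The launcher, as the @58-@66 xtrace suggests; the loop form is inferred, while nthreads, pids and the NSID-to-bdev pairing come straight from the log:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # NSID i+1 is paired with bdev null<i>
        pids+=($!)
    done
    wait "${pids[@]}"                      # matches the logged 'wait 735441 735442 ...'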
00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 735441 735442 735444 735446 735448 735450 735452 735454 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.916 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:37.173 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:37.173 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.173 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:37.173 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:37.431 12:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:37.431 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:37.431 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:37.431 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.689 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:37.947 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
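With eight workers racing, the adds and removes interleave nondeterministically, which is why the order of the remove batches above shifts from round to round. When a run like this needs inspecting, the subsystem's live namespace set can be dumped between rounds; the following spot-check is hypothetical (it is not part of ns_hotplug_stress.sh) and assumes the usual JSON shape returned by the nvmf_get_subsystems RPC:

    # Hypothetical spot-check: print the NSIDs currently attached to cnode1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | python3 -c 'import json,sys; print(sorted(ns["nsid"] for s in json.load(sys.stdin) if s["nqn"] == "nqn.2016-06.io.spdk:cnode1" for ns in s.get("namespaces", [])))'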
00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.206 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:38.464 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.722 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:38.980 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:38.980 12:39:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.980 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:38.980 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:38.980 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:39.237 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:39.237 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:39.237 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.495 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:39.753 
12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:39.753 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.011 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:40.269 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.527 
12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.527 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:40.786 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.418 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:41.418 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.676 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:41.934 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.934 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.934 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:42.191 12:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.191 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:42.449 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.449 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.449 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:42.449 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.449 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.450 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:42.707 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.965 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:42.966 rmmod nvme_tcp 00:27:42.966 rmmod nvme_fabrics 00:27:42.966 rmmod nvme_keyring 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 730504 ']' 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 730504 00:27:42.966 12:39:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 730504 ']' 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 730504 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 730504 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 730504' 00:27:42.966 killing process with pid 730504 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 730504 00:27:42.966 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 730504 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.224 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:45.756 00:27:45.756 real 0m47.510s 00:27:45.756 user 3m17.784s 00:27:45.756 sys 0m22.527s 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:45.756 12:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:45.756 ************************************ 00:27:45.756 END TEST nvmf_ns_hotplug_stress 00:27:45.756 ************************************ 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:45.756 ************************************ 00:27:45.756 START TEST nvmf_delete_subsystem 00:27:45.756 ************************************ 00:27:45.756 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:45.756 * Looking for test storage... 00:27:45.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:45.756 12:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:45.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.756 --rc genhtml_branch_coverage=1 00:27:45.756 --rc genhtml_function_coverage=1 00:27:45.756 --rc genhtml_legend=1 00:27:45.756 --rc geninfo_all_blocks=1 00:27:45.756 --rc geninfo_unexecuted_blocks=1 00:27:45.756 00:27:45.756 ' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:45.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.756 --rc genhtml_branch_coverage=1 00:27:45.756 --rc genhtml_function_coverage=1 00:27:45.756 --rc genhtml_legend=1 00:27:45.756 --rc geninfo_all_blocks=1 00:27:45.756 --rc geninfo_unexecuted_blocks=1 00:27:45.756 00:27:45.756 ' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:45.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.756 --rc genhtml_branch_coverage=1 00:27:45.756 --rc genhtml_function_coverage=1 00:27:45.756 --rc genhtml_legend=1 00:27:45.756 --rc geninfo_all_blocks=1 00:27:45.756 --rc 
geninfo_unexecuted_blocks=1 00:27:45.756 00:27:45.756 ' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:45.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.756 --rc genhtml_branch_coverage=1 00:27:45.756 --rc genhtml_function_coverage=1 00:27:45.756 --rc genhtml_legend=1 00:27:45.756 --rc geninfo_all_blocks=1 00:27:45.756 --rc geninfo_unexecuted_blocks=1 00:27:45.756 00:27:45.756 ' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.756 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.757 12:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:47.657 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.657 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.658 12:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:47.658 12:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:47.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:47.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.658 12:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:47.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:47.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.658 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:27:47.659 00:27:47.659 --- 10.0.0.2 ping statistics --- 00:27:47.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.659 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:47.659 00:27:47.659 --- 10.0.0.1 ping statistics --- 00:27:47.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.659 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:47.659 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=738323 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 738323 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 738323 ']' 00:27:47.917 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.918 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:47.918 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
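[Editor's note] Everything nvmf_tcp_init logged above reduces to splitting the two ice/e810 ports found earlier (cvl_0_0 and cvl_0_1, both up) into a target side and an initiator side on the same host: the target port moves into a private network namespace, each side gets a 10.0.0.x/24 address, TCP port 4420 is opened, and one ping in each direction proves the path. Condensed from the literal commands in the trace, minus the initial address flushes and the SPDK_NVMF comment the ipts wrapper appends to the iptables rule:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

With the namespace in place, nvmfappstart launches nvmf_tgt under ip netns exec cvl_0_0_ns_spdk, so the target listens on 10.0.0.2 while the test scripts act as the initiator from the root namespace.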
00:27:47.918 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:47.918 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:47.918 [2024-10-30 12:39:20.414646] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:47.918 [2024-10-30 12:39:20.415760] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:27:47.918 [2024-10-30 12:39:20.415832] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.918 [2024-10-30 12:39:20.489474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:47.918 [2024-10-30 12:39:20.548548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.918 [2024-10-30 12:39:20.548624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.918 [2024-10-30 12:39:20.548652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.918 [2024-10-30 12:39:20.548664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.918 [2024-10-30 12:39:20.548673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.918 [2024-10-30 12:39:20.550091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.918 [2024-10-30 12:39:20.550096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.176 [2024-10-30 12:39:20.646927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:48.176 [2024-10-30 12:39:20.646950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:48.176 [2024-10-30 12:39:20.647202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
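
The target itself is then launched inside that namespace with a two-core mask and, notably for this suite, --interrupt-mode, which is why the reactor and spdk_thread notices above report interrupt rather than poll mode. waitforlisten blocks until the app accepts RPCs on /var/tmp/spdk.sock; a minimal stand-in for that wait, probing with the generic rpc_get_methods call (the harness's exact probe may differ), looks like:

    # -i 0 sets the shm id, -e 0xFFFF the tracepoint group mask seen in the notices above
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the target answers (or bail if it died)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
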
00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 [2024-10-30 12:39:20.698717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 [2024-10-30 12:39:20.714991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 NULL1 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.176 12:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 Delay0 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=738346 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:48.176 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:48.176 [2024-10-30 12:39:20.795764] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
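
Everything the test needs is then provisioned over RPC: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a namespace backed by a null bdev wrapped in a delay bdev. The 1000000-microsecond arguments to bdev_delay_create (average and tail latency, read and write, if I read the flags right) give every I/O roughly a second of artificial latency, so a full queue of commands is guaranteed to be in flight when, after the two-second warm-up above, the script yanks the subsystem mid-run. The same sequence issued directly through scripts/rpc.py, with the perf workload backgrounded as the harness does:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512          # 1000 MB backing device, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # queue-depth-128 random 70/30 read/write load against the delayed namespace
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while I/O is queued
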
00:27:50.075 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:50.075 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.075 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:50.333 [several hundred 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers elided, here and between the qpair state errors below]
00:27:50.333 [2024-10-30 12:39:22.998225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5e400d310 is same with the state(6) to be set
00:27:50.334 [2024-10-30 12:39:22.998864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe7680 is same with the state(6) to be set
00:27:50.334 [2024-10-30 12:39:22.999322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5e4000c00 is same with the state(6) to be set
00:27:51.707 [2024-10-30 12:39:23.977570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe89a0 is same with the state(6) to be set
00:27:51.708 [2024-10-30 12:39:24.000874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe7860 is same with the state(6) to be set
00:27:51.708 [2024-10-30 12:39:24.001086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe74a0 is same with the state(6) to be set
00:27:51.708 [2024-10-30 12:39:24.001625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5e400d640 is same with the state(6) to be set
00:27:51.708 [2024-10-30 12:39:24.002412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5e400cfe0 is same with the state(6) to be set
00:27:51.708 Initializing NVMe Controllers
00:27:51.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:51.708 Controller IO queue size 128, less than required.
00:27:51.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:51.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:51.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:51.708 Initialization complete. Launching workers.
00:27:51.708 ========================================================
00:27:51.708 Latency(us)
00:27:51.708 Device Information : IOPS MiB/s Average min max
00:27:51.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.65 0.08 890116.16 520.60 1012892.45
00:27:51.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.26 0.08 921644.20 747.34 1013084.76
00:27:51.708 ========================================================
00:27:51.708 Total : 329.91 0.16 905240.13 520.60 1013084.76
00:27:51.708
00:27:51.708 [2024-10-30 12:39:24.003022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe89a0 (9): Bad file descriptor
00:27:51.708 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:51.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:51.708 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:27:51.708 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 738346
00:27:51.708 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 738346
00:27:51.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (738346) - No such process
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 738346
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 738346
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 738346
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:51.966 [2024-10-30 12:39:24.522903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=738864
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:27:51.966 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[2024-10-30 12:39:24.586753] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
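
With the subsystem recreated and a second, 3-second perf run attached, this pass lets perf finish its timed workload and merely bounds the wait: the repeated kill -0 / sleep 0.5 lines that follow poll the perf PID at half-second intervals, failing the test if it is still alive after roughly ten seconds, and once perf exits, kill -0 reports "No such process" and wait reaps it. In outline, matching the xtrace (the script's real failure path is condensed to exit 1 here):

    delay=0
    while kill -0 "$perf_pid"; do      # 'No such process' here means perf has exited
        sleep 0.5
        (( delay++ > 20 )) && exit 1   # still alive after ~10 s: fail the test
    done
    wait "$perf_pid"                   # reap the completed run and collect its status
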
00:27:52.529 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:52.529 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:52.529 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:53.094 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:53.094 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:53.094 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:53.659 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:53.659 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:53.659 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:53.917 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:53.917 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:53.917 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:54.482 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:54.482 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:54.482 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:55.047 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:55.047 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:55.047 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:27:55.305 Initializing NVMe Controllers
00:27:55.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:55.305 Controller IO queue size 128, less than required.
00:27:55.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:55.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:27:55.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:27:55.305 Initialization complete. Launching workers.
00:27:55.305 ========================================================
00:27:55.305 Latency(us)
00:27:55.305 Device Information : IOPS MiB/s Average min max
00:27:55.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004849.00 1000200.89 1041490.17
00:27:55.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004061.77 1000178.63 1010676.54
00:27:55.305 ========================================================
00:27:55.305 Total : 256.00 0.12 1004455.38 1000178.63 1041490.17
00:27:55.305
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 738864
00:27:55.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (738864) - No such process
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 738864
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:55.562 rmmod nvme_tcp
00:27:55.562 rmmod nvme_fabrics
00:27:55.562 rmmod nvme_keyring
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 738323 ']'
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 738323
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 738323 ']'
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 738323
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 738323 00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 738323' 00:27:55.562 killing process with pid 738323 00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 738323 00:27:55.562 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 738323 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.820 12:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.351 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.351 00:27:58.351 real 0m12.474s 00:27:58.351 user 0m24.745s 00:27:58.351 sys 0m3.936s 00:27:58.351 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:58.351 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.351 ************************************ 00:27:58.352 END TEST nvmf_delete_subsystem 00:27:58.352 ************************************ 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:58.352 ************************************ 00:27:58.352 START TEST nvmf_host_management 00:27:58.352 ************************************ 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:58.352 * Looking for test storage... 00:27:58.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.352 --rc genhtml_branch_coverage=1 00:27:58.352 --rc genhtml_function_coverage=1 00:27:58.352 --rc genhtml_legend=1 00:27:58.352 --rc geninfo_all_blocks=1 00:27:58.352 --rc geninfo_unexecuted_blocks=1 00:27:58.352 00:27:58.352 ' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.352 --rc genhtml_branch_coverage=1 00:27:58.352 --rc genhtml_function_coverage=1 00:27:58.352 --rc genhtml_legend=1 00:27:58.352 --rc geninfo_all_blocks=1 00:27:58.352 --rc geninfo_unexecuted_blocks=1 00:27:58.352 00:27:58.352 ' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.352 --rc genhtml_branch_coverage=1 00:27:58.352 --rc genhtml_function_coverage=1 00:27:58.352 --rc genhtml_legend=1 00:27:58.352 --rc geninfo_all_blocks=1 00:27:58.352 --rc geninfo_unexecuted_blocks=1 00:27:58.352 00:27:58.352 ' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.352 --rc genhtml_branch_coverage=1 00:27:58.352 --rc genhtml_function_coverage=1 00:27:58.352 --rc genhtml_legend=1 
00:27:58.352 --rc geninfo_all_blocks=1 00:27:58.352 --rc geninfo_unexecuted_blocks=1 00:27:58.352 00:27:58.352 ' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.352 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated six more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same duplicated toolchain prefix]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same duplicated toolchain prefix]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same duplicated toolchain prefix]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.353 12:39:30
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.353 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.251 12:39:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.251 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
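The trace above is nvmf/common.sh enumerating supported NICs: candidate addresses come out of a vendor:device cache, the e810 list wins here, and each device's bound net interface is then resolved through sysfs. A condensed sketch of that discovery pattern follows; it is not the verbatim source, and pci_bus_cache is assumed to be an associative array filled from lspci/sysfs earlier in the script.

# Condensed sketch of the discovery loop traced above (not verbatim nvmf/common.sh).
# Assumption: pci_bus_cache maps "vendor:device" -> space-separated PCI BDFs.
declare -A pci_bus_cache
intel=0x8086
e810=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # unquoted on purpose: split the cached list
e810+=(${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci ($(cat /sys/bus/pci/devices/$pci/vendor) - $(cat /sys/bus/pci/devices/$pci/device))"
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) the kernel bound to this BDF
  net_devs+=("${pci_net_devs[@]##*/}")               # strip the sysfs path, keep the iface name
done

With two E810 ports found, the first two entries of net_devs become cvl_0_0 and cvl_0_1, exactly as echoed below.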
00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:28:00.252 00:28:00.252 --- 10.0.0.2 ping statistics --- 00:28:00.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.252 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:28:00.252 00:28:00.252 --- 10.0.0.1 ping statistics --- 00:28:00.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.252 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:00.252 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=741205 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 741205 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 741205 ']' 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:00.253 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.510 [2024-10-30 12:39:32.949738] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:00.510 [2024-10-30 12:39:32.950751] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:28:00.510 [2024-10-30 12:39:32.950800] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.510 [2024-10-30 12:39:33.031100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.510 [2024-10-30 12:39:33.096977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.510 [2024-10-30 12:39:33.097033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.510 [2024-10-30 12:39:33.097062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.510 [2024-10-30 12:39:33.097074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.510 [2024-10-30 12:39:33.097083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.510 [2024-10-30 12:39:33.098791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.510 [2024-10-30 12:39:33.098851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.510 [2024-10-30 12:39:33.098874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.510 [2024-10-30 12:39:33.098877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.768 [2024-10-30 12:39:33.199400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:00.768 [2024-10-30 12:39:33.199642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:00.768 [2024-10-30 12:39:33.199896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:00.768 [2024-10-30 12:39:33.200594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:00.768 [2024-10-30 12:39:33.200839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
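At this point the target NIC sits in its own namespace, so nvmfappstart launches nvmf_tgt inside it and blocks until the app answers on its UNIX-domain RPC socket. A minimal sketch of that launch-and-wait step, with waitforlisten reduced to a simple polling loop (an assumption about its internals; the trace only shows that it retries with max_retries=100):

# Sketch only: launch the target in the namespace, then poll the RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1   # the real helper bounds this with max_retries=100
done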
00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.768 [2024-10-30 12:39:33.251633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.768 Malloc0 00:28:00.768 [2024-10-30 12:39:33.323838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=741255 00:28:00.768 12:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 741255 /var/tmp/bdevperf.sock 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 741255 ']' 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:00.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:00.768 { 00:28:00.768 "params": { 00:28:00.768 "name": "Nvme$subsystem", 00:28:00.768 "trtype": "$TEST_TRANSPORT", 00:28:00.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.768 "adrfam": "ipv4", 00:28:00.768 "trsvcid": "$NVMF_PORT", 00:28:00.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.768 "hdgst": ${hdgst:-false}, 00:28:00.768 "ddgst": ${ddgst:-false} 00:28:00.768 }, 00:28:00.768 "method": "bdev_nvme_attach_controller" 00:28:00.768 } 00:28:00.768 EOF 00:28:00.768 )") 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
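Between the heredoc above and the printf just below sits the whole of gen_nvmf_target_json: one JSON fragment is accumulated per subsystem, the fragments are comma-joined via IFS, and jq normalizes the result before it reaches bdevperf. A minimal standalone rendering of the pattern (flush-left <<EOF instead of the script's tab-indented <<-EOF; field values as printed in the trace):

config=()
for subsystem in 0; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .   # jq validates/normalizes, as in the trace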
00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:00.768 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:00.768 "params": { 00:28:00.768 "name": "Nvme0", 00:28:00.768 "trtype": "tcp", 00:28:00.768 "traddr": "10.0.0.2", 00:28:00.768 "adrfam": "ipv4", 00:28:00.768 "trsvcid": "4420", 00:28:00.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:00.768 "hdgst": false, 00:28:00.768 "ddgst": false 00:28:00.768 }, 00:28:00.768 "method": "bdev_nvme_attach_controller" 00:28:00.768 }' 00:28:00.768 [2024-10-30 12:39:33.405600] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:28:00.768 [2024-10-30 12:39:33.405691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741255 ] 00:28:01.026 [2024-10-30 12:39:33.474966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.026 [2024-10-30 12:39:33.534965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.283 Running I/O for 10 seconds... 00:28:01.283 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:01.283 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:01.283 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:01.283 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.283 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:01.283 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:01.284 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:01.542 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:01.542 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:01.542 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.543 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:01.543 [2024-10-30 12:39:34.111801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.111864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.111894] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.111910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.111926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.111941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.111957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.111972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.111998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.543 [2024-10-30 12:39:34.112917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.543 [2024-10-30 12:39:34.112932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.112946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.112961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.112975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.112990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.544 [2024-10-30 12:39:34.113729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.544 [2024-10-30 12:39:34.113745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.544 [2024-10-30 12:39:34.113759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.113774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.544 [2024-10-30 12:39:34.113788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.113804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.544 [2024-10-30 12:39:34.113818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.113860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:01.544 [2024-10-30 12:39:34.113995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.544 [2024-10-30 12:39:34.114018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.114034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.544 [2024-10-30 12:39:34.114048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.114066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.544 [2024-10-30 12:39:34.114080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.114094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.544 [2024-10-30 12:39:34.114108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.114120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c87a40 is same with the state(6) to be set
00:28:01.544 [2024-10-30 12:39:34.115264] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:01.544 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.544 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:01.544 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:01.544 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:01.544 task offset: 76928 on job bdev=Nvme0n1 fails
00:28:01.544
00:28:01.544                                                               Latency(us)
00:28:01.544 [2024-10-30T11:39:34.225Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average       min       max
00:28:01.544 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:01.544 Job: Nvme0n1 ended in about 0.40 seconds with error
00:28:01.544 Verification LBA range: start 0x0 length 0x400
00:28:01.544 Nvme0n1            :       0.40  1437.65    89.85   159.74     0.00   38931.91   2694.26  38836.15
00:28:01.544 [2024-10-30T11:39:34.225Z] ===================================================================================================================
00:28:01.544 [2024-10-30T11:39:34.225Z] Total              :          1437.65    89.85   159.74     0.00   38931.91   2694.26  38836.15
00:28:01.544 [2024-10-30 12:39:34.117176] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:01.544 [2024-10-30 12:39:34.117203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c87a40 (9): Bad file descriptor
00:28:01.544 [2024-10-30 12:39:34.118393] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:28:01.544 [2024-10-30 12:39:34.118492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:01.544 [2024-10-30 12:39:34.118520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.544 [2024-10-30 12:39:34.118545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:28:01.544 [2024-10-30 12:39:34.118570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:28:01.544 [2024-10-30 12:39:34.118584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.544 [2024-10-30 12:39:34.118595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c87a40
00:28:01.544 [2024-10-30 12:39:34.118632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c87a40 (9): Bad file descriptor
00:28:01.544 [2024-10-30 12:39:34.118657] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:01.544 [2024-10-30 12:39:34.118671] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:01.544 [2024-10-30 12:39:34.118691] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:01.544 [2024-10-30 12:39:34.118715] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
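[Recap, not part of the captured log] The dump above is the negative half of the host-management check: the bdevperf initiator connects as nqn.2016-06.io.spdk:host0 while that host NQN is evidently not yet on the subsystem's allow list, so the target rejects the FABRIC CONNECT ("does not allow host", sct 1 / sc 132), in-flight READs are aborted by SQ deletion, and the controller reset loop keeps failing. The test then allows the host over RPC; a minimal bash sketch of that step, using the rpc.py path and NQNs from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# The subsystem's host ACL is enforced (no allow-any-host), so listing the
# initiator's host NQN is what lets the next CONNECT attempt succeed.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0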
00:28:01.545 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:01.545 12:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 741255
00:28:02.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (741255) - No such process
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:02.476 {
00:28:02.476 "params": {
00:28:02.476 "name": "Nvme$subsystem",
00:28:02.476 "trtype": "$TEST_TRANSPORT",
00:28:02.476 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:02.476 "adrfam": "ipv4",
00:28:02.476 "trsvcid": "$NVMF_PORT",
00:28:02.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:02.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:02.476 "hdgst": ${hdgst:-false},
00:28:02.476 "ddgst": ${ddgst:-false}
00:28:02.476 },
00:28:02.476 "method": "bdev_nvme_attach_controller"
00:28:02.476 }
00:28:02.476 EOF
00:28:02.476 )")
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:28:02.476 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:02.476 "params": {
00:28:02.476 "name": "Nvme0",
00:28:02.476 "trtype": "tcp",
00:28:02.476 "traddr": "10.0.0.2",
00:28:02.476 "adrfam": "ipv4",
00:28:02.476 "trsvcid": "4420",
00:28:02.476 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:02.476 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:02.476 "hdgst": false,
00:28:02.476 "ddgst": false
00:28:02.476 },
00:28:02.476 "method": "bdev_nvme_attach_controller"
00:28:02.476 }'
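[Recap, not part of the captured log] gen_nvmf_target_json renders the bdev_nvme_attach_controller parameters printed above, and bdevperf consumes them through --json /dev/fd/62 instead of a config file on disk. A standalone bash sketch of that fd-backed pattern, under the assumption that the rendered object is wrapped in the standard SPDK subsystems/config layout (the helper does that wrapping off-screen):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# The heredoc is attached to fd 62, which bdevperf opens as /dev/fd/62.
$spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 62<<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF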
00:28:02.734 [2024-10-30 12:39:35.171753] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:28:02.734 [2024-10-30 12:39:35.171828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741522 ]
00:28:02.734 [2024-10-30 12:39:35.241667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:02.734 [2024-10-30 12:39:35.300143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:02.991 Running I/O for 1 seconds...
00:28:03.920 1621.00 IOPS, 101.31 MiB/s
00:28:03.920
00:28:03.920                                                               Latency(us)
00:28:03.920 [2024-10-30T11:39:36.601Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average       min       max
00:28:03.920 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.920 Verification LBA range: start 0x0 length 0x400
00:28:03.920 Nvme0n1            :       1.04  1669.09   104.32     0.00     0.00   37729.52   5655.51  33787.45
00:28:03.920 [2024-10-30T11:39:36.601Z] ===================================================================================================================
00:28:03.920 [2024-10-30T11:39:36.601Z] Total              :          1669.09   104.32     0.00     0.00   37729.52   5655.51  33787.45
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:04.176 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 741205 ']'
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 741205
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 741205 ']'
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 741205
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:04.176 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 741205
00:28:04.432 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:04.432 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:04.432 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 741205'
killing process with pid 741205
00:28:04.432 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 741205
00:28:04.432 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 741205
00:28:04.432 [2024-10-30 12:39:37.092733] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:04.690 12:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:28:06.594
00:28:06.594 real 0m8.697s
00:28:06.594 user 0m16.907s
00:28:06.594 sys 0m3.653s
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:06.594 ************************************
00:28:06.594 END TEST nvmf_host_management
00:28:06.594 ************************************
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:06.594 ************************************
00:28:06.594 START TEST nvmf_lvol
00:28:06.594 ************************************
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:28:06.594 * Looking for test storage...
00:28:06.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:28:06.594 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:28:06.854 12:39:39
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:06.854 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:06.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.855 --rc genhtml_branch_coverage=1 00:28:06.855 --rc genhtml_function_coverage=1 00:28:06.855 --rc genhtml_legend=1 00:28:06.855 --rc geninfo_all_blocks=1 00:28:06.855 --rc geninfo_unexecuted_blocks=1 00:28:06.855 00:28:06.855 ' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:06.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.855 --rc genhtml_branch_coverage=1 00:28:06.855 --rc genhtml_function_coverage=1 00:28:06.855 --rc genhtml_legend=1 00:28:06.855 --rc geninfo_all_blocks=1 00:28:06.855 --rc geninfo_unexecuted_blocks=1 00:28:06.855 00:28:06.855 ' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:06.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.855 --rc genhtml_branch_coverage=1 00:28:06.855 --rc genhtml_function_coverage=1 00:28:06.855 --rc genhtml_legend=1 00:28:06.855 --rc geninfo_all_blocks=1 00:28:06.855 --rc geninfo_unexecuted_blocks=1 00:28:06.855 00:28:06.855 ' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:06.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.855 --rc genhtml_branch_coverage=1 00:28:06.855 --rc genhtml_function_coverage=1 00:28:06.855 --rc 
genhtml_legend=1 00:28:06.855 --rc geninfo_all_blocks=1 00:28:06.855 --rc geninfo_unexecuted_blocks=1 00:28:06.855 00:28:06.855 ' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.855 12:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.855 12:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.384 12:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.384 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.384 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.384 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.384 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.385 
12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:09.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:09.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms
00:28:09.385
00:28:09.385 --- 10.0.0.2 ping statistics ---
00:28:09.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:09.385 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:09.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:09.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:28:09.385
00:28:09.385 --- 10.0.0.1 ping statistics ---
00:28:09.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:09.385 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=743723
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 743723
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 743723 ']'
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:28:09.385 [2024-10-30 12:39:41.690264] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:09.385 [2024-10-30 12:39:41.691276] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:28:09.385 [2024-10-30 12:39:41.691324] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:09.385 [2024-10-30 12:39:41.763838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:09.385 [2024-10-30 12:39:41.822838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:09.385 [2024-10-30 12:39:41.822903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:09.385 [2024-10-30 12:39:41.822932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:09.385 [2024-10-30 12:39:41.822943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:09.385 [2024-10-30 12:39:41.822953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:09.385 [2024-10-30 12:39:41.824497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:09.385 [2024-10-30 12:39:41.824526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:09.385 [2024-10-30 12:39:41.824544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:09.385 [2024-10-30 12:39:41.919342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:09.385 [2024-10-30 12:39:41.919551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:28:09.385 [2024-10-30 12:39:41.919579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:09.385 [2024-10-30 12:39:41.919834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
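[Recap, not part of the captured log] With the interrupt-mode target up in the netns, nvmf_lvol.sh builds its namespace bottom-up over RPC and then mutates the lvol while spdk_nvme_perf writes to it; the full trace follows below. Condensed into the rpc.py sequence of this run (create calls print the new UUID, captured here as shell substitutions):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # lvol UUID, initial size 20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf runs against the exported namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                    # grow to final size 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # make the clone independent of the snapshot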
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:09.385 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:28:09.645 [2024-10-30 12:39:42.217328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:09.645 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:28:09.902 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:28:09.902 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:28:10.160 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:28:10.160 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:28:10.724 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:28:10.724 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=05228153-94bd-4ff4-b529-7946f826887c
00:28:10.724 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 05228153-94bd-4ff4-b529-7946f826887c lvol 20
00:28:10.982 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8d9c9842-aa1d-4989-bb90-22368ffc4b7d
00:28:10.982 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:28:11.548 12:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d9c9842-aa1d-4989-bb90-22368ffc4b7d
00:28:11.548 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:11.805 [2024-10-30 12:39:44.465513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:11.805 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:12.370 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=744032
00:28:12.370 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:28:12.370 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:28:13.304 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8d9c9842-aa1d-4989-bb90-22368ffc4b7d MY_SNAPSHOT
00:28:13.562 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=79df131c-147e-425d-bfaf-cd8feeb2f6b3
00:28:13.562 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8d9c9842-aa1d-4989-bb90-22368ffc4b7d 30
00:28:13.819 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 79df131c-147e-425d-bfaf-cd8feeb2f6b3 MY_CLONE
00:28:14.077 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d6b8a074-0741-4501-9b90-cf8597877e2d
00:28:14.077 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d6b8a074-0741-4501-9b90-cf8597877e2d
00:28:15.014 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 744032
00:28:23.140 Initializing NVMe Controllers
00:28:23.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:28:23.140 Controller IO queue size 128, less than required.
00:28:23.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:23.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:28:23.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:28:23.140 Initialization complete. Launching workers.
00:28:23.140 ========================================================
00:28:23.140                                                                        Latency(us)
00:28:23.140 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:28:23.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   10590.10      41.37   12087.74     347.64   83992.93
00:28:23.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   10505.91      41.04   12183.60    1225.74   77704.03
00:28:23.140 ========================================================
00:28:23.140 Total                                                                :   21096.01      82.41   12135.48     347.64   83992.93
00:28:23.140
00:28:23.140 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:28:23.140 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d9c9842-aa1d-4989-bb90-22368ffc4b7d
00:28:23.397 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05228153-94bd-4ff4-b529-7946f826887c
00:28:23.397 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:28:23.397 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:28:23.397 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:28:23.397 12:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:23.397 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 743723 ']'
00:28:23.397 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 743723
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 743723 ']'
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 743723
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 743723
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 743723'
killing process with pid 743723
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 743723
00:28:23.398 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 743723
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:23.963 12:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:25.871
00:28:25.871 real 0m19.174s
00:28:25.871 user 0m56.597s
00:28:25.871 sys 0m7.662s
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:28:25.871 ************************************
00:28:25.871 END TEST nvmf_lvol
00:28:25.871 ************************************
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:25.871 ************************************
00:28:25.871 START TEST nvmf_lvs_grow
************************************ 00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:25.871 * Looking for test storage... 00:28:25.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.871 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.130 --rc genhtml_branch_coverage=1 00:28:26.130 --rc genhtml_function_coverage=1 00:28:26.130 --rc genhtml_legend=1 00:28:26.130 --rc geninfo_all_blocks=1 00:28:26.130 --rc geninfo_unexecuted_blocks=1 00:28:26.130 00:28:26.130 ' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.130 --rc genhtml_branch_coverage=1 00:28:26.130 --rc genhtml_function_coverage=1 00:28:26.130 --rc genhtml_legend=1 00:28:26.130 --rc geninfo_all_blocks=1 00:28:26.130 --rc geninfo_unexecuted_blocks=1 00:28:26.130 00:28:26.130 ' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.130 --rc genhtml_branch_coverage=1 00:28:26.130 --rc genhtml_function_coverage=1 00:28:26.130 --rc genhtml_legend=1 00:28:26.130 --rc geninfo_all_blocks=1 00:28:26.130 --rc geninfo_unexecuted_blocks=1 00:28:26.130 00:28:26.130 ' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:26.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.130 --rc genhtml_branch_coverage=1 00:28:26.130 --rc genhtml_function_coverage=1 00:28:26.130 --rc genhtml_legend=1 00:28:26.130 --rc geninfo_all_blocks=1 00:28:26.130 --rc geninfo_unexecuted_blocks=1 00:28:26.130 00:28:26.130 ' 00:28:26.130 12:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.130 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.131 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.032 12:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
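The classification trace above follows a simple pattern: known NIC device IDs are bucketed by family, the family selected via SPDK_TEST_NVMF_NICS becomes the working set, and each PCI address is then mapped to its netdev through sysfs. A condensed sketch (pci_bus_cache is assumed to be the vendor:device -> PCI-address map that common.sh populates earlier):

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 in this job
    for pci in "${pci_devs[@]}"; do              # both 0000:0a:00.0/.1 match 0x159b here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")         # yields cvl_0_0 and cvl_0_1 below
    done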
00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.032 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:28.032 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.032 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.033 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.033 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.033 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.033 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.291 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.291 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.291 12:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.291 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:28:28.291 00:28:28.291 --- 10.0.0.2 ping statistics --- 00:28:28.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.292 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:28:28.292 00:28:28.292 --- 10.0.0.1 ping statistics --- 00:28:28.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.292 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=747400 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 747400 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 747400 ']' 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.292 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:28.292 [2024-10-30 12:40:00.961597] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
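Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above is plain iproute2 plumbing: one port of the NIC moves into a private namespace as the target side, the peer stays in the root namespace as the initiator, and both directions are ping-verified before the target starts. A sketch with the names and addresses as logged (the nvmf_tgt path is shortened):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1       # target ns -> root ns
    # The target then runs inside the namespace on one core, interrupt mode:
    ip netns exec "$ns" nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &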
00:28:28.292 [2024-10-30 12:40:00.962677] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:28:28.292 [2024-10-30 12:40:00.962725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.550 [2024-10-30 12:40:01.032748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.550 [2024-10-30 12:40:01.085093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.550 [2024-10-30 12:40:01.085167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.550 [2024-10-30 12:40:01.085191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.550 [2024-10-30 12:40:01.085202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.550 [2024-10-30 12:40:01.085212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.550 [2024-10-30 12:40:01.085858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.550 [2024-10-30 12:40:01.168025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:28.550 [2024-10-30 12:40:01.168324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.550 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:28.808 [2024-10-30 12:40:01.462475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.808 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:28.808 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:28.808 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:28.808 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:29.067 ************************************ 00:28:29.067 START TEST lvs_grow_clean 00:28:29.067 ************************************ 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:29.068 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:29.326 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:29.326 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:29.584 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ceb61a22-c35b-4184-9b10-796626f5893f 00:28:29.584 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:29.584 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:29.842 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:29.842 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:29.842 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ceb61a22-c35b-4184-9b10-796626f5893f lvol 150 00:28:30.099 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad5353d7-7884-45cc-bdf6-de04e9b34a11 00:28:30.099 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:30.099 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:30.357 [2024-10-30 12:40:02.898332] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:30.357 [2024-10-30 12:40:02.898421] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:30.357 true 00:28:30.357 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:30.357 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:30.615 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:30.615 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:30.872 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad5353d7-7884-45cc-bdf6-de04e9b34a11 00:28:31.129 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.387 [2024-10-30 12:40:03.994638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.387 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:31.644 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=747833 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 747833 /var/tmp/bdevperf.sock 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 747833 ']' 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:31.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:31.645 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.903 [2024-10-30 12:40:04.331189] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:28:31.903 [2024-10-30 12:40:04.331302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747833 ] 00:28:31.903 [2024-10-30 12:40:04.397564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.903 [2024-10-30 12:40:04.460169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.903 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:31.903 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:28:31.903 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:32.468 Nvme0n1 00:28:32.468 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:32.726 [ 00:28:32.726 { 00:28:32.726 "name": "Nvme0n1", 00:28:32.726 "aliases": [ 00:28:32.726 "ad5353d7-7884-45cc-bdf6-de04e9b34a11" 00:28:32.726 ], 00:28:32.726 "product_name": "NVMe disk", 00:28:32.726 "block_size": 4096, 00:28:32.726 "num_blocks": 38912, 00:28:32.726 "uuid": "ad5353d7-7884-45cc-bdf6-de04e9b34a11", 00:28:32.726 "numa_id": 0, 00:28:32.726 "assigned_rate_limits": { 00:28:32.726 "rw_ios_per_sec": 0, 00:28:32.726 "rw_mbytes_per_sec": 0, 00:28:32.726 "r_mbytes_per_sec": 0, 00:28:32.726 "w_mbytes_per_sec": 0 00:28:32.726 }, 00:28:32.726 "claimed": false, 00:28:32.726 "zoned": false, 00:28:32.726 "supported_io_types": { 00:28:32.726 "read": true, 00:28:32.726 "write": true, 00:28:32.726 "unmap": true, 00:28:32.726 "flush": true, 00:28:32.726 "reset": true, 00:28:32.726 "nvme_admin": true, 00:28:32.726 "nvme_io": true, 00:28:32.726 "nvme_io_md": false, 00:28:32.726 "write_zeroes": true, 00:28:32.726 "zcopy": false, 00:28:32.726 "get_zone_info": false, 00:28:32.726 "zone_management": false, 00:28:32.726 "zone_append": false, 00:28:32.726 "compare": true, 00:28:32.726 "compare_and_write": true, 00:28:32.726 "abort": true, 00:28:32.726 "seek_hole": false, 00:28:32.726 "seek_data": false, 00:28:32.726 "copy": true, 
00:28:32.726 "nvme_iov_md": false 00:28:32.726 }, 00:28:32.726 "memory_domains": [ 00:28:32.726 { 00:28:32.726 "dma_device_id": "system", 00:28:32.726 "dma_device_type": 1 00:28:32.726 } 00:28:32.726 ], 00:28:32.726 "driver_specific": { 00:28:32.726 "nvme": [ 00:28:32.726 { 00:28:32.726 "trid": { 00:28:32.726 "trtype": "TCP", 00:28:32.726 "adrfam": "IPv4", 00:28:32.726 "traddr": "10.0.0.2", 00:28:32.726 "trsvcid": "4420", 00:28:32.726 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:32.726 }, 00:28:32.726 "ctrlr_data": { 00:28:32.726 "cntlid": 1, 00:28:32.726 "vendor_id": "0x8086", 00:28:32.726 "model_number": "SPDK bdev Controller", 00:28:32.726 "serial_number": "SPDK0", 00:28:32.726 "firmware_revision": "25.01", 00:28:32.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:32.726 "oacs": { 00:28:32.726 "security": 0, 00:28:32.726 "format": 0, 00:28:32.726 "firmware": 0, 00:28:32.726 "ns_manage": 0 00:28:32.726 }, 00:28:32.726 "multi_ctrlr": true, 00:28:32.726 "ana_reporting": false 00:28:32.726 }, 00:28:32.726 "vs": { 00:28:32.726 "nvme_version": "1.3" 00:28:32.726 }, 00:28:32.726 "ns_data": { 00:28:32.726 "id": 1, 00:28:32.726 "can_share": true 00:28:32.726 } 00:28:32.726 } 00:28:32.726 ], 00:28:32.726 "mp_policy": "active_passive" 00:28:32.726 } 00:28:32.726 } 00:28:32.726 ] 00:28:32.726 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=747852 00:28:32.726 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:32.726 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:32.726 Running I/O for 10 seconds... 
00:28:33.659 Latency(us)
[2024-10-30T11:40:06.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:33.659 Nvme0n1 : 1.00 14880.00 58.12 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:06.340Z] ===================================================================================================================
00:28:33.659
[2024-10-30T11:40:06.340Z] Total : 14880.00 58.12 0.00 0.00 0.00 0.00 0.00
00:28:33.659
00:28:34.592 12:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ceb61a22-c35b-4184-9b10-796626f5893f
00:28:34.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:34.850 Nvme0n1 : 2.00 15292.00 59.73 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:07.531Z] ===================================================================================================================
00:28:34.850
[2024-10-30T11:40:07.531Z] Total : 15292.00 59.73 0.00 0.00 0.00 0.00 0.00
00:28:34.850
00:28:34.850 true
00:28:34.850 12:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f
00:28:34.850 12:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:28:35.415 12:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:28:35.415 12:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:28:35.415 12:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 747852
00:28:35.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:35.673 Nvme0n1 : 3.00 15279.67 59.69 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:08.354Z] ===================================================================================================================
00:28:35.673
[2024-10-30T11:40:08.354Z] Total : 15279.67 59.69 0.00 0.00 0.00 0.00 0.00
00:28:35.673
00:28:37.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:37.098 Nvme0n1 : 4.00 15398.25 60.15 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:09.779Z] ===================================================================================================================
00:28:37.098
[2024-10-30T11:40:09.779Z] Total : 15398.25 60.15 0.00 0.00 0.00 0.00 0.00
00:28:37.098
00:28:37.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:37.703 Nvme0n1 : 5.00 15449.60 60.35 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:10.384Z] ===================================================================================================================
00:28:37.703
[2024-10-30T11:40:10.384Z] Total : 15449.60 60.35 0.00 0.00 0.00 0.00 0.00
00:28:37.703
00:28:39.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:39.083 Nvme0n1 : 6.00 15522.50 60.63 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:11.764Z] ===================================================================================================================
00:28:39.083
[2024-10-30T11:40:11.764Z] Total : 15522.50 60.63 0.00 0.00 0.00 0.00 0.00
00:28:39.083
00:28:40.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:40.014 Nvme0n1 : 7.00 15573.29 60.83 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:12.695Z] ===================================================================================================================
00:28:40.014
[2024-10-30T11:40:12.695Z] Total : 15573.29 60.83 0.00 0.00 0.00 0.00 0.00
00:28:40.014
00:28:40.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:40.949 Nvme0n1 : 8.00 15627.25 61.04 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:13.630Z] ===================================================================================================================
00:28:40.949
[2024-10-30T11:40:13.630Z] Total : 15627.25 61.04 0.00 0.00 0.00 0.00 0.00
00:28:40.949
00:28:41.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:41.882 Nvme0n1 : 9.00 15691.89 61.30 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:14.563Z] ===================================================================================================================
00:28:41.882
[2024-10-30T11:40:14.563Z] Total : 15691.89 61.30 0.00 0.00 0.00 0.00 0.00
00:28:41.882
00:28:42.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:42.817 Nvme0n1 : 10.00 15743.90 61.50 0.00 0.00 0.00 0.00 0.00
[2024-10-30T11:40:15.498Z] ===================================================================================================================
00:28:42.817
[2024-10-30T11:40:15.498Z] Total : 15743.90 61.50 0.00 0.00 0.00 0.00 0.00
00:28:42.817
00:28:42.817
00:28:42.817 Latency(us)
[2024-10-30T11:40:15.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:42.817 Nvme0n1 : 10.00 15751.94 61.53 0.00 0.00 8121.54 4636.07 17961.72
[2024-10-30T11:40:15.498Z] ===================================================================================================================
[2024-10-30T11:40:15.498Z] Total : 15751.94 61.53 0.00 0.00 8121.54 4636.07 17961.72
00:28:42.817 {
00:28:42.817 "results": [
00:28:42.817 {
00:28:42.817 "job": "Nvme0n1",
00:28:42.817 "core_mask": "0x2",
00:28:42.817 "workload": "randwrite",
00:28:42.817 "status": "finished",
00:28:42.817 "queue_depth": 128,
00:28:42.817 "io_size": 4096,
00:28:42.817 "runtime": 10.003021,
00:28:42.817 "iops": 15751.941338521632,
00:28:42.817 "mibps": 61.531020853600126,
00:28:42.817 "io_failed": 0,
00:28:42.817 "io_timeout": 0,
00:28:42.817 "avg_latency_us": 8121.53874348102,
00:28:42.817 "min_latency_us": 4636.065185185185,
00:28:42.817 "max_latency_us": 17961.71851851852
00:28:42.817 }
00:28:42.817 ],
00:28:42.817 "core_count": 1
00:28:42.817 }
00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 747833
00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 747833 ']'
00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 747833
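The mid-run grow traced above reduces to four steps once the xtrace noise is removed; sketched standalone (the AIO file path is shortened and $lvs stands for the lvstore UUID logged above):

    truncate -s 400M /path/to/aio_bdev           # double the 200M backing file
    rpc.py bdev_aio_rescan aio_bdev              # AIO bdev picks up the new size
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # lvstore claims the new clusters
    clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                         # 49 data clusters before the grow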
00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 747833 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 747833' 00:28:42.817 killing process with pid 747833 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 747833 00:28:42.817 Received shutdown signal, test time was about 10.000000 seconds 00:28:42.817 00:28:42.817 Latency(us) 00:28:42.817 [2024-10-30T11:40:15.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.817 [2024-10-30T11:40:15.498Z] =================================================================================================================== 00:28:42.817 [2024-10-30T11:40:15.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.817 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 747833 00:28:43.075 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:43.334 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:43.591 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:43.592 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:43.849 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:43.849 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:43.849 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:44.106 [2024-10-30 12:40:16.770426] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 
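Here the test asserts a failure: once bdev_aio_delete has hot-removed the backing bdev and closed the lvstore, bdev_lvol_get_lvstores must return -19 (No such device), and the NOT wrapper inverts that exit status. A minimal sketch of such an inverter (illustrative; the real wrapper also validates its argument, as the trace below shows):

  # Sketch: succeed only when the wrapped command fails.
  NOT() {
    if "$@"; then
      return 1   # command unexpectedly succeeded
    fi
    return 0     # command failed, which is what the test expects
  }
  # usage: NOT rpc.py bdev_lvol_get_lvstores -u <uuid-of-removed-lvstore>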
00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:44.365 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:44.623 request: 00:28:44.623 { 00:28:44.623 "uuid": "ceb61a22-c35b-4184-9b10-796626f5893f", 00:28:44.623 "method": "bdev_lvol_get_lvstores", 00:28:44.623 "req_id": 1 00:28:44.623 } 00:28:44.623 Got JSON-RPC error response 00:28:44.623 response: 00:28:44.623 { 00:28:44.623 "code": -19, 00:28:44.623 "message": "No such device" 00:28:44.623 } 00:28:44.623 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:28:44.623 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:44.623 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:44.623 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:44.623 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:44.881 aio_bdev 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ad5353d7-7884-45cc-bdf6-de04e9b34a11 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=ad5353d7-7884-45cc-bdf6-de04e9b34a11 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:44.881 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:45.139 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad5353d7-7884-45cc-bdf6-de04e9b34a11 -t 2000 00:28:45.398 [ 00:28:45.398 { 00:28:45.398 "name": "ad5353d7-7884-45cc-bdf6-de04e9b34a11", 00:28:45.398 "aliases": [ 00:28:45.398 "lvs/lvol" 00:28:45.398 ], 00:28:45.398 "product_name": "Logical Volume", 00:28:45.398 "block_size": 4096, 00:28:45.398 "num_blocks": 38912, 00:28:45.398 "uuid": "ad5353d7-7884-45cc-bdf6-de04e9b34a11", 00:28:45.398 "assigned_rate_limits": { 00:28:45.398 "rw_ios_per_sec": 0, 00:28:45.398 "rw_mbytes_per_sec": 0, 00:28:45.398 "r_mbytes_per_sec": 0, 00:28:45.398 "w_mbytes_per_sec": 0 00:28:45.398 }, 00:28:45.398 "claimed": false, 00:28:45.398 "zoned": false, 00:28:45.398 "supported_io_types": { 00:28:45.398 "read": true, 00:28:45.398 "write": true, 00:28:45.398 "unmap": true, 00:28:45.398 "flush": false, 00:28:45.398 "reset": true, 00:28:45.398 "nvme_admin": false, 00:28:45.398 "nvme_io": false, 00:28:45.398 "nvme_io_md": false, 00:28:45.398 "write_zeroes": true, 00:28:45.398 "zcopy": false, 00:28:45.398 "get_zone_info": false, 00:28:45.398 "zone_management": false, 00:28:45.398 "zone_append": false, 00:28:45.398 "compare": false, 00:28:45.398 "compare_and_write": false, 00:28:45.398 "abort": false, 00:28:45.398 "seek_hole": true, 00:28:45.398 "seek_data": true, 00:28:45.398 "copy": false, 00:28:45.398 "nvme_iov_md": false 00:28:45.398 }, 00:28:45.398 "driver_specific": { 00:28:45.398 "lvol": { 00:28:45.398 "lvol_store_uuid": "ceb61a22-c35b-4184-9b10-796626f5893f", 00:28:45.398 "base_bdev": "aio_bdev", 00:28:45.398 "thin_provision": false, 00:28:45.398 "num_allocated_clusters": 38, 00:28:45.398 "snapshot": false, 00:28:45.398 "clone": false, 00:28:45.398 "esnap_clone": false 00:28:45.398 } 00:28:45.398 } 00:28:45.398 } 00:28:45.398 ] 00:28:45.398 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:28:45.398 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:45.398 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:45.656 12:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:45.657 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:45.657 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:45.914 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:45.914 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad5353d7-7884-45cc-bdf6-de04e9b34a11 00:28:46.171 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ceb61a22-c35b-4184-9b10-796626f5893f 00:28:46.429 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:46.688 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:46.688 00:28:46.688 real 0m17.833s 00:28:46.688 user 0m16.799s 00:28:46.688 sys 0m2.085s 00:28:46.688 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:46.688 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:46.688 ************************************ 00:28:46.688 END TEST lvs_grow_clean 00:28:46.688 ************************************ 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:46.947 ************************************ 00:28:46.947 START TEST lvs_grow_dirty 00:28:46.947 ************************************ 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:46.947 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:47.205 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:47.205 12:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:47.462 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:28:47.462 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:28:47.462 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:47.719 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:47.720 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:47.720 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 lvol 150 00:28:47.977 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1942ab61-1025-4cce-83f5-4a5f49321174 00:28:47.977 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:47.977 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:48.236 [2024-10-30 12:40:20.866338] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:48.236 [2024-10-30 12:40:20.866437] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:48.236 true 00:28:48.236 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:28:48.236 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:48.494 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:48.494 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:49.060 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1942ab61-1025-4cce-83f5-4a5f49321174 00:28:49.061 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:49.319 [2024-10-30 12:40:21.978642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.319 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=749900 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 749900 /var/tmp/bdevperf.sock 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 749900 ']' 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:49.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
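The bdevperf run that follows uses the standard three-step flow: start the app idle on its own RPC socket, attach the NVMe-oF/TCP subsystem as a bdev, then trigger the timed workload. Condensed from the flags visible in this trace (paths shortened):

  # 1. Start bdevperf idle (-z): core mask 0x2, 4 KiB I/O, QD 128, randwrite, 10 s
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # 2. Attach the target; the controller shows up as bdev Nvme0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # 3. Run the configured job; progress is reported per second and
  #    summarized as JSON at the end
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests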
00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:49.885 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:49.885 [2024-10-30 12:40:22.326619] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:28:49.885 [2024-10-30 12:40:22.326706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749900 ] 00:28:49.885 [2024-10-30 12:40:22.401447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.885 [2024-10-30 12:40:22.465056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.142 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:50.142 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:28:50.142 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:50.400 Nvme0n1 00:28:50.400 12:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:50.657 [ 00:28:50.657 { 00:28:50.657 "name": "Nvme0n1", 00:28:50.657 "aliases": [ 00:28:50.657 "1942ab61-1025-4cce-83f5-4a5f49321174" 00:28:50.657 ], 00:28:50.657 "product_name": "NVMe disk", 00:28:50.657 "block_size": 4096, 00:28:50.657 "num_blocks": 38912, 00:28:50.657 "uuid": "1942ab61-1025-4cce-83f5-4a5f49321174", 00:28:50.657 "numa_id": 0, 00:28:50.657 "assigned_rate_limits": { 00:28:50.657 "rw_ios_per_sec": 0, 00:28:50.657 "rw_mbytes_per_sec": 0, 00:28:50.657 "r_mbytes_per_sec": 0, 00:28:50.657 "w_mbytes_per_sec": 0 00:28:50.657 }, 00:28:50.657 "claimed": false, 00:28:50.657 "zoned": false, 00:28:50.657 "supported_io_types": { 00:28:50.657 "read": true, 00:28:50.657 "write": true, 00:28:50.657 "unmap": true, 00:28:50.657 "flush": true, 00:28:50.657 "reset": true, 00:28:50.657 "nvme_admin": true, 00:28:50.657 "nvme_io": true, 00:28:50.657 "nvme_io_md": false, 00:28:50.657 "write_zeroes": true, 00:28:50.657 "zcopy": false, 00:28:50.657 "get_zone_info": false, 00:28:50.657 "zone_management": false, 00:28:50.657 "zone_append": false, 00:28:50.657 "compare": true, 00:28:50.657 "compare_and_write": true, 00:28:50.657 "abort": true, 00:28:50.657 "seek_hole": false, 00:28:50.657 "seek_data": false, 00:28:50.657 "copy": true, 00:28:50.657 "nvme_iov_md": false 00:28:50.657 }, 00:28:50.657 "memory_domains": [ 00:28:50.657 { 00:28:50.657 "dma_device_id": "system", 00:28:50.657 "dma_device_type": 1 00:28:50.657 } 00:28:50.657 ], 00:28:50.657 "driver_specific": { 00:28:50.657 "nvme": [ 00:28:50.657 { 00:28:50.657 "trid": { 00:28:50.657 "trtype": "TCP", 00:28:50.657 "adrfam": "IPv4", 00:28:50.657 "traddr": "10.0.0.2", 00:28:50.657 "trsvcid": "4420", 00:28:50.657 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:50.657 }, 00:28:50.657 "ctrlr_data": { 
00:28:50.657 "cntlid": 1, 00:28:50.657 "vendor_id": "0x8086", 00:28:50.657 "model_number": "SPDK bdev Controller", 00:28:50.657 "serial_number": "SPDK0", 00:28:50.657 "firmware_revision": "25.01", 00:28:50.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.657 "oacs": { 00:28:50.657 "security": 0, 00:28:50.657 "format": 0, 00:28:50.657 "firmware": 0, 00:28:50.657 "ns_manage": 0 00:28:50.657 }, 00:28:50.657 "multi_ctrlr": true, 00:28:50.657 "ana_reporting": false 00:28:50.657 }, 00:28:50.657 "vs": { 00:28:50.657 "nvme_version": "1.3" 00:28:50.657 }, 00:28:50.657 "ns_data": { 00:28:50.657 "id": 1, 00:28:50.657 "can_share": true 00:28:50.657 } 00:28:50.657 } 00:28:50.657 ], 00:28:50.657 "mp_policy": "active_passive" 00:28:50.657 } 00:28:50.657 } 00:28:50.657 ] 00:28:50.657 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=750033 00:28:50.657 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:50.657 12:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:50.916 Running I/O for 10 seconds... 00:28:51.849 Latency(us) 00:28:51.849 [2024-10-30T11:40:24.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.849 Nvme0n1 : 1.00 14631.00 57.15 0.00 0.00 0.00 0.00 0.00 00:28:51.849 [2024-10-30T11:40:24.530Z] =================================================================================================================== 00:28:51.849 [2024-10-30T11:40:24.530Z] Total : 14631.00 57.15 0.00 0.00 0.00 0.00 0.00 00:28:51.849 00:28:52.781 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:28:52.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.781 Nvme0n1 : 2.00 14729.50 57.54 0.00 0.00 0.00 0.00 0.00 00:28:52.781 [2024-10-30T11:40:25.462Z] =================================================================================================================== 00:28:52.781 [2024-10-30T11:40:25.462Z] Total : 14729.50 57.54 0.00 0.00 0.00 0.00 0.00 00:28:52.781 00:28:53.039 true 00:28:53.039 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:28:53.039 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:53.297 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:53.297 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:53.297 12:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 750033 00:28:53.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.862 Nvme0n1 : 3.00 
14844.67 57.99 0.00 0.00 0.00 0.00 0.00 00:28:53.862 [2024-10-30T11:40:26.543Z] =================================================================================================================== 00:28:53.862 [2024-10-30T11:40:26.543Z] Total : 14844.67 57.99 0.00 0.00 0.00 0.00 0.00 00:28:53.862 00:28:54.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.795 Nvme0n1 : 4.00 14888.25 58.16 0.00 0.00 0.00 0.00 0.00 00:28:54.795 [2024-10-30T11:40:27.476Z] =================================================================================================================== 00:28:54.795 [2024-10-30T11:40:27.476Z] Total : 14888.25 58.16 0.00 0.00 0.00 0.00 0.00 00:28:54.795 00:28:55.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.729 Nvme0n1 : 5.00 14952.80 58.41 0.00 0.00 0.00 0.00 0.00 00:28:55.729 [2024-10-30T11:40:28.410Z] =================================================================================================================== 00:28:55.729 [2024-10-30T11:40:28.410Z] Total : 14952.80 58.41 0.00 0.00 0.00 0.00 0.00 00:28:55.729 00:28:57.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.102 Nvme0n1 : 6.00 14973.67 58.49 0.00 0.00 0.00 0.00 0.00 00:28:57.102 [2024-10-30T11:40:29.783Z] =================================================================================================================== 00:28:57.102 [2024-10-30T11:40:29.783Z] Total : 14973.67 58.49 0.00 0.00 0.00 0.00 0.00 00:28:57.102 00:28:58.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.037 Nvme0n1 : 7.00 15024.86 58.69 0.00 0.00 0.00 0.00 0.00 00:28:58.037 [2024-10-30T11:40:30.718Z] =================================================================================================================== 00:28:58.037 [2024-10-30T11:40:30.718Z] Total : 15024.86 58.69 0.00 0.00 0.00 0.00 0.00 00:28:58.037 00:28:58.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.970 Nvme0n1 : 8.00 15038.88 58.75 0.00 0.00 0.00 0.00 0.00 00:28:58.970 [2024-10-30T11:40:31.651Z] =================================================================================================================== 00:28:58.970 [2024-10-30T11:40:31.651Z] Total : 15038.88 58.75 0.00 0.00 0.00 0.00 0.00 00:28:58.970 00:28:59.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.901 Nvme0n1 : 9.00 15074.00 58.88 0.00 0.00 0.00 0.00 0.00 00:28:59.901 [2024-10-30T11:40:32.582Z] =================================================================================================================== 00:28:59.901 [2024-10-30T11:40:32.582Z] Total : 15074.00 58.88 0.00 0.00 0.00 0.00 0.00 00:28:59.901 00:29:00.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.835 Nvme0n1 : 10.00 15118.60 59.06 0.00 0.00 0.00 0.00 0.00 00:29:00.835 [2024-10-30T11:40:33.516Z] =================================================================================================================== 00:29:00.835 [2024-10-30T11:40:33.516Z] Total : 15118.60 59.06 0.00 0.00 0.00 0.00 0.00 00:29:00.835 00:29:00.835 00:29:00.835 Latency(us) 00:29:00.835 [2024-10-30T11:40:33.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.835 Nvme0n1 : 10.00 15119.30 59.06 0.00 0.00 8460.88 4320.52 18155.90 00:29:00.835 
[2024-10-30T11:40:33.516Z] =================================================================================================================== 00:29:00.835 [2024-10-30T11:40:33.516Z] Total : 15119.30 59.06 0.00 0.00 8460.88 4320.52 18155.90 00:29:00.835 { 00:29:00.835 "results": [ 00:29:00.835 { 00:29:00.835 "job": "Nvme0n1", 00:29:00.835 "core_mask": "0x2", 00:29:00.835 "workload": "randwrite", 00:29:00.835 "status": "finished", 00:29:00.835 "queue_depth": 128, 00:29:00.835 "io_size": 4096, 00:29:00.835 "runtime": 10.003771, 00:29:00.835 "iops": 15119.298512530924, 00:29:00.835 "mibps": 59.05975981457392, 00:29:00.835 "io_failed": 0, 00:29:00.835 "io_timeout": 0, 00:29:00.835 "avg_latency_us": 8460.882392663605, 00:29:00.835 "min_latency_us": 4320.521481481482, 00:29:00.835 "max_latency_us": 18155.89925925926 00:29:00.835 } 00:29:00.835 ], 00:29:00.835 "core_count": 1 00:29:00.835 } 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 749900 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 749900 ']' 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 749900 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 749900 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 749900' 00:29:00.835 killing process with pid 749900 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 749900 00:29:00.835 Received shutdown signal, test time was about 10.000000 seconds 00:29:00.835 00:29:00.835 Latency(us) 00:29:00.835 [2024-10-30T11:40:33.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.835 [2024-10-30T11:40:33.516Z] =================================================================================================================== 00:29:00.835 [2024-10-30T11:40:33.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.835 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 749900 00:29:01.093 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:01.389 12:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:01.694 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:01.694 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 747400 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 747400 00:29:01.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 747400 Killed "${NVMF_APP[@]}" "$@" 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=751351 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 751351 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 751351 ']' 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
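nvmfappstart has just launched nvmf_tgt in interrupt mode (-i 0 -e 0xFFFF --interrupt-mode -m 0x1) inside the cvl_0_0_ns_spdk network namespace, and waitforlisten now blocks until the app answers on /var/tmp/spdk.sock. One way to poll for that readiness (a sketch, not the actual waitforlisten implementation):

  # Sketch: wait until the target's RPC server responds, or give up.
  for i in $(seq 1 100); do
    if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
      break          # target is up and serving RPCs
    fi
    sleep 0.1        # polling interval assumed for this sketch
  done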
00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:01.961 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:01.961 [2024-10-30 12:40:34.552670] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:01.961 [2024-10-30 12:40:34.553736] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:29:01.961 [2024-10-30 12:40:34.553803] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.961 [2024-10-30 12:40:34.626013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.221 [2024-10-30 12:40:34.682828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.221 [2024-10-30 12:40:34.682887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.221 [2024-10-30 12:40:34.682915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.221 [2024-10-30 12:40:34.682926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.221 [2024-10-30 12:40:34.682936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.221 [2024-10-30 12:40:34.683503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.221 [2024-10-30 12:40:34.769731] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.221 [2024-10-30 12:40:34.770022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
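Because the previous target (pid 747400) was killed with -9 while the grown lvstore metadata was still dirty, re-creating the aio bdev below makes the lvol examine path run blobstore recovery (the "Performing recovery on blobstore" and "Recover: blob ..." notices). The re-attach sequence, as driven by the RPCs in this trace:

  # Re-create the backing aio bdev over the same file; examine replays
  # the dirty lvstore metadata automatically.
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # The recovered lvstore should report the grown cluster count.
  scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 \
      | jq -r '.[0].total_data_clusters'   # expect 99 after the grow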
00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.221 12:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:02.480 [2024-10-30 12:40:35.118337] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:02.480 [2024-10-30 12:40:35.118480] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:02.480 [2024-10-30 12:40:35.118532] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1942ab61-1025-4cce-83f5-4a5f49321174 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1942ab61-1025-4cce-83f5-4a5f49321174 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:02.480 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:02.740 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1942ab61-1025-4cce-83f5-4a5f49321174 -t 2000 00:29:03.000 [ 00:29:03.000 { 00:29:03.000 "name": "1942ab61-1025-4cce-83f5-4a5f49321174", 00:29:03.000 "aliases": [ 00:29:03.000 "lvs/lvol" 00:29:03.000 ], 00:29:03.000 "product_name": "Logical Volume", 00:29:03.000 "block_size": 4096, 00:29:03.000 "num_blocks": 38912, 00:29:03.000 "uuid": "1942ab61-1025-4cce-83f5-4a5f49321174", 00:29:03.000 "assigned_rate_limits": { 00:29:03.000 "rw_ios_per_sec": 0, 00:29:03.000 "rw_mbytes_per_sec": 0, 00:29:03.000 
"r_mbytes_per_sec": 0, 00:29:03.000 "w_mbytes_per_sec": 0 00:29:03.000 }, 00:29:03.000 "claimed": false, 00:29:03.000 "zoned": false, 00:29:03.000 "supported_io_types": { 00:29:03.000 "read": true, 00:29:03.000 "write": true, 00:29:03.000 "unmap": true, 00:29:03.000 "flush": false, 00:29:03.000 "reset": true, 00:29:03.000 "nvme_admin": false, 00:29:03.000 "nvme_io": false, 00:29:03.000 "nvme_io_md": false, 00:29:03.000 "write_zeroes": true, 00:29:03.000 "zcopy": false, 00:29:03.000 "get_zone_info": false, 00:29:03.000 "zone_management": false, 00:29:03.000 "zone_append": false, 00:29:03.000 "compare": false, 00:29:03.000 "compare_and_write": false, 00:29:03.000 "abort": false, 00:29:03.000 "seek_hole": true, 00:29:03.000 "seek_data": true, 00:29:03.000 "copy": false, 00:29:03.000 "nvme_iov_md": false 00:29:03.000 }, 00:29:03.000 "driver_specific": { 00:29:03.001 "lvol": { 00:29:03.001 "lvol_store_uuid": "9a49dd05-5f5a-444c-bdf9-8ae63a449189", 00:29:03.001 "base_bdev": "aio_bdev", 00:29:03.001 "thin_provision": false, 00:29:03.001 "num_allocated_clusters": 38, 00:29:03.001 "snapshot": false, 00:29:03.001 "clone": false, 00:29:03.001 "esnap_clone": false 00:29:03.001 } 00:29:03.001 } 00:29:03.001 } 00:29:03.001 ] 00:29:03.001 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:03.001 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:03.001 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:03.570 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:03.570 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:03.570 12:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:03.570 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:03.570 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:03.828 [2024-10-30 12:40:36.480044] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:03.828 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:03.828 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:03.828 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:03.828 12:40:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:03.828 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.828 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:04.087 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.087 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:04.087 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:04.087 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:04.087 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:04.087 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:04.344 request: 00:29:04.344 { 00:29:04.344 "uuid": "9a49dd05-5f5a-444c-bdf9-8ae63a449189", 00:29:04.344 "method": "bdev_lvol_get_lvstores", 00:29:04.344 "req_id": 1 00:29:04.344 } 00:29:04.344 Got JSON-RPC error response 00:29:04.344 response: 00:29:04.344 { 00:29:04.344 "code": -19, 00:29:04.344 "message": "No such device" 00:29:04.344 } 00:29:04.344 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:04.344 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:04.344 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:04.344 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:04.344 12:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:04.602 aio_bdev 00:29:04.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1942ab61-1025-4cce-83f5-4a5f49321174 00:29:04.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=1942ab61-1025-4cce-83f5-4a5f49321174 00:29:04.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:04.602 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:04.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:04.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:04.602 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:04.862 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1942ab61-1025-4cce-83f5-4a5f49321174 -t 2000 00:29:05.121 [ 00:29:05.121 { 00:29:05.121 "name": "1942ab61-1025-4cce-83f5-4a5f49321174", 00:29:05.121 "aliases": [ 00:29:05.121 "lvs/lvol" 00:29:05.121 ], 00:29:05.121 "product_name": "Logical Volume", 00:29:05.121 "block_size": 4096, 00:29:05.121 "num_blocks": 38912, 00:29:05.121 "uuid": "1942ab61-1025-4cce-83f5-4a5f49321174", 00:29:05.121 "assigned_rate_limits": { 00:29:05.121 "rw_ios_per_sec": 0, 00:29:05.121 "rw_mbytes_per_sec": 0, 00:29:05.121 "r_mbytes_per_sec": 0, 00:29:05.121 "w_mbytes_per_sec": 0 00:29:05.121 }, 00:29:05.121 "claimed": false, 00:29:05.121 "zoned": false, 00:29:05.121 "supported_io_types": { 00:29:05.121 "read": true, 00:29:05.121 "write": true, 00:29:05.121 "unmap": true, 00:29:05.121 "flush": false, 00:29:05.121 "reset": true, 00:29:05.121 "nvme_admin": false, 00:29:05.121 "nvme_io": false, 00:29:05.121 "nvme_io_md": false, 00:29:05.121 "write_zeroes": true, 00:29:05.121 "zcopy": false, 00:29:05.121 "get_zone_info": false, 00:29:05.121 "zone_management": false, 00:29:05.121 "zone_append": false, 00:29:05.121 "compare": false, 00:29:05.121 "compare_and_write": false, 00:29:05.121 "abort": false, 00:29:05.121 "seek_hole": true, 00:29:05.121 "seek_data": true, 00:29:05.121 "copy": false, 00:29:05.121 "nvme_iov_md": false 00:29:05.121 }, 00:29:05.121 "driver_specific": { 00:29:05.121 "lvol": { 00:29:05.121 "lvol_store_uuid": "9a49dd05-5f5a-444c-bdf9-8ae63a449189", 00:29:05.121 "base_bdev": "aio_bdev", 00:29:05.121 "thin_provision": false, 00:29:05.121 "num_allocated_clusters": 38, 00:29:05.121 "snapshot": false, 00:29:05.121 "clone": false, 00:29:05.121 "esnap_clone": false 00:29:05.121 } 00:29:05.121 } 00:29:05.121 } 00:29:05.121 ] 00:29:05.121 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:05.121 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:05.121 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:05.382 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:05.382 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:05.382 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:05.641 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:05.641 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1942ab61-1025-4cce-83f5-4a5f49321174 00:29:05.900 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a49dd05-5f5a-444c-bdf9-8ae63a449189 00:29:06.157 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:06.415 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:06.415 00:29:06.415 real 0m19.592s 00:29:06.415 user 0m36.641s 00:29:06.415 sys 0m4.745s 00:29:06.415 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:06.415 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:06.415 ************************************ 00:29:06.415 END TEST lvs_grow_dirty 00:29:06.415 ************************************ 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:06.415 nvmf_trace.0 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
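nvmftestfini now tears the transport down: sync, unload the kernel NVMe modules inside a retry loop (a module can stay busy for a moment while references drain, hence the set +e), then restore iptables and drop the namespace. The shape of that loop, condensed from the trace that follows:

  set +e
  for i in {1..20}; do
    # removing nvme-tcp also pulls out nvme_fabrics/nvme_keyring here
    modprobe -v -r nvme-tcp && break
    sleep 1          # retry interval assumed for this sketch
  done
  modprobe -v -r nvme-fabrics   # make sure the fabrics core is gone too
  set -e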
00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:06.415 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:06.415 rmmod nvme_tcp 00:29:06.415 rmmod nvme_fabrics 00:29:06.415 rmmod nvme_keyring 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 751351 ']' 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 751351 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 751351 ']' 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 751351 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 751351 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 751351' 00:29:06.673 killing process with pid 751351 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 751351 00:29:06.673 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 751351 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.933 12:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:08.838 00:29:08.838 real 0m42.971s 00:29:08.838 user 0m55.203s 00:29:08.838 sys 0m8.854s 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:08.838 ************************************ 00:29:08.838 END TEST nvmf_lvs_grow 00:29:08.838 ************************************ 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:08.838 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:08.838 ************************************ 00:29:08.838 START TEST nvmf_bdev_io_wait 00:29:08.838 ************************************ 00:29:08.839 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:08.839 * Looking for test storage... 
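Note: the iptr call inside nvmftestfini above restores the firewall by replaying the saved ruleset minus anything the suite tagged; every rule the tests add carries an SPDK_NVMF comment (visible later when the bdev_io_wait setup inserts its ACCEPT rule), so cleanup is a single pipeline:

  # Re-apply the current ruleset, dropping every rule tagged with an SPDK_NVMF comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore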
00:29:08.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:08.839 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:08.839 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:08.839 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.097 --rc genhtml_branch_coverage=1 00:29:09.097 --rc genhtml_function_coverage=1 00:29:09.097 --rc genhtml_legend=1 00:29:09.097 --rc geninfo_all_blocks=1 00:29:09.097 --rc geninfo_unexecuted_blocks=1 00:29:09.097 00:29:09.097 ' 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.097 --rc genhtml_branch_coverage=1 00:29:09.097 --rc genhtml_function_coverage=1 00:29:09.097 --rc genhtml_legend=1 00:29:09.097 --rc geninfo_all_blocks=1 00:29:09.097 --rc geninfo_unexecuted_blocks=1 00:29:09.097 00:29:09.097 ' 00:29:09.097 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:09.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.097 --rc genhtml_branch_coverage=1 00:29:09.097 --rc genhtml_function_coverage=1 00:29:09.097 --rc genhtml_legend=1 00:29:09.098 --rc geninfo_all_blocks=1 00:29:09.098 --rc geninfo_unexecuted_blocks=1 00:29:09.098 00:29:09.098 ' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:09.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.098 --rc genhtml_branch_coverage=1 00:29:09.098 --rc genhtml_function_coverage=1 00:29:09.098 --rc genhtml_legend=1 00:29:09.098 --rc geninfo_all_blocks=1 00:29:09.098 --rc 
geninfo_unexecuted_blocks=1 00:29:09.098 00:29:09.098 ' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.098 12:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
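Note: gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID, as the array appends above show; pci_bus_cache is assumed (per the sourced common.sh) to map "vendor:device" keys to bus addresses. Condensed:

  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})     # Intel E810 variants
  e810+=(${pci_bus_cache["$intel:0x159b"]})
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # one of several ConnectX IDs collected
  pci_devs=("${e810[@]}")                       # SPDK_TEST_NVMF_NICS=e810 narrows the set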
00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:11.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:11.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:11.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:11.632 
12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:11.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.632 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:29:11.633 00:29:11.633 --- 10.0.0.2 ping statistics --- 00:29:11.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.633 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:29:11.633 00:29:11.633 --- 10.0.0.1 ping statistics --- 00:29:11.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.633 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=753879 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 753879 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 753879 ']' 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
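Note: the two pings above complete the single-host topology: the two ports of the E810 card (0000:0a:00.0 and 0000:0a:00.1) are split across network namespaces, so target and initiator talk over a real link rather than the loopback device. The plumbing, condensed from the trace:

  ip netns add cvl_0_0_ns_spdk                           # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator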
00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:11.633 12:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.633 [2024-10-30 12:40:44.035601] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:11.633 [2024-10-30 12:40:44.036882] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:29:11.633 [2024-10-30 12:40:44.036940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.633 [2024-10-30 12:40:44.116148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.633 [2024-10-30 12:40:44.177157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.633 [2024-10-30 12:40:44.177224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.633 [2024-10-30 12:40:44.177252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.633 [2024-10-30 12:40:44.177274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.633 [2024-10-30 12:40:44.177284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.633 [2024-10-30 12:40:44.178920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.633 [2024-10-30 12:40:44.178945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.633 [2024-10-30 12:40:44.179006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.633 [2024-10-30 12:40:44.179010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.633 [2024-10-30 12:40:44.179569] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
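Note: per the notices above, nvmf_tgt comes up inside the target namespace with a four-core mask (0xF, reactors on cores 0-3) in interrupt mode, and --wait-for-rpc parks initialization until the RPCs below arrive. The launch reduces to (binary path abbreviated):

  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # common.sh helper: blocks until /var/tmp/spdk.sock answers RPCs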
00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.633 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.893 [2024-10-30 12:40:44.365358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:11.893 [2024-10-30 12:40:44.365615] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:11.893 [2024-10-30 12:40:44.366449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:11.893 [2024-10-30 12:40:44.367227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
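Note: this pair of RPCs is the heart of the setup. While the target is still paused, bdev_set_options shrinks the global bdev_io pool to 5 entries with a per-thread cache of 1 (the -p/-c flags of scripts/rpc.py), and only then does framework_start_init run the initialization deferred by --wait-for-rpc. With bdevperf later driving queue depth 128 against a 5-entry pool, bdev_io allocations must fail, forcing the queued I/O-wait retry path this test exists to exercise:

  rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool: 5 total, cache of 1 per thread
  rpc.py framework_start_init         # now perform the deferred subsystem init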
00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.893 [2024-10-30 12:40:44.371778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.893 Malloc0 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:11.893 [2024-10-30 12:40:44.431935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=754030 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=754032 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.893 { 00:29:11.893 "params": { 00:29:11.893 "name": "Nvme$subsystem", 00:29:11.893 "trtype": "$TEST_TRANSPORT", 00:29:11.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.893 "adrfam": "ipv4", 00:29:11.893 "trsvcid": "$NVMF_PORT", 00:29:11.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.893 "hdgst": ${hdgst:-false}, 00:29:11.893 "ddgst": ${ddgst:-false} 00:29:11.893 }, 00:29:11.893 "method": "bdev_nvme_attach_controller" 00:29:11.893 } 00:29:11.893 EOF 00:29:11.893 )") 00:29:11.893 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=754034 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.894 { 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme$subsystem", 00:29:11.894 "trtype": "$TEST_TRANSPORT", 00:29:11.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "$NVMF_PORT", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.894 "hdgst": ${hdgst:-false}, 00:29:11.894 "ddgst": ${ddgst:-false} 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 } 00:29:11.894 EOF 00:29:11.894 )") 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=754037 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.894 { 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme$subsystem", 00:29:11.894 "trtype": "$TEST_TRANSPORT", 00:29:11.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "$NVMF_PORT", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.894 "hdgst": ${hdgst:-false}, 00:29:11.894 "ddgst": ${ddgst:-false} 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 } 00:29:11.894 EOF 00:29:11.894 )") 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.894 { 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme$subsystem", 00:29:11.894 "trtype": "$TEST_TRANSPORT", 00:29:11.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "$NVMF_PORT", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.894 "hdgst": ${hdgst:-false}, 00:29:11.894 "ddgst": ${ddgst:-false} 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 } 00:29:11.894 EOF 00:29:11.894 )") 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 754030 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
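Note: target provisioning, gathered from the rpc_cmd traces above, is five calls; everything bdevperf touches below is the RAM-backed Malloc0 namespace behind cnode1:

  rpc.py nvmf_create_transport -t tcp -o -u 8192   # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS; -u 8192 is the IO unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420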
00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme1", 00:29:11.894 "trtype": "tcp", 00:29:11.894 "traddr": "10.0.0.2", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "4420", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.894 "hdgst": false, 00:29:11.894 "ddgst": false 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 }' 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme1", 00:29:11.894 "trtype": "tcp", 00:29:11.894 "traddr": "10.0.0.2", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "4420", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.894 "hdgst": false, 00:29:11.894 "ddgst": false 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 }' 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme1", 00:29:11.894 "trtype": "tcp", 00:29:11.894 "traddr": "10.0.0.2", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "4420", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.894 "hdgst": false, 00:29:11.894 "ddgst": false 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 }' 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:11.894 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.894 "params": { 00:29:11.894 "name": "Nvme1", 00:29:11.894 "trtype": "tcp", 00:29:11.894 "traddr": "10.0.0.2", 00:29:11.894 "adrfam": "ipv4", 00:29:11.894 "trsvcid": "4420", 00:29:11.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.894 "hdgst": false, 00:29:11.894 "ddgst": false 00:29:11.894 }, 00:29:11.894 "method": "bdev_nvme_attach_controller" 00:29:11.894 }' 00:29:11.894 [2024-10-30 12:40:44.481955] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:29:11.894 [2024-10-30 12:40:44.481955] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
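Note: each bdevperf gets its controller definition through process substitution: gen_nvmf_target_json prints the JSON blocks shown above and bash hands them over as /dev/fd/63, which is exactly what the traced command lines show. Four instances run concurrently, one workload and one core apiece, and the script waits on each captured PID in turn. Condensed:

  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
  bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # the script itself waits one PID at a time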
00:29:11.894 [2024-10-30 12:40:44.482038] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:29:11.894 [2024-10-30 12:40:44.482038] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:29:11.894 [2024-10-30 12:40:44.482279] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:29:11.894 [2024-10-30 12:40:44.482278] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization...
00:29:11.894 [2024-10-30 12:40:44.482354] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:29:11.894 [2024-10-30 12:40:44.482354] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:29:12.152 [2024-10-30 12:40:44.667506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.152 [2024-10-30 12:40:44.721114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:12.152 [2024-10-30 12:40:44.764599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.152 [2024-10-30 12:40:44.815479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:12.152 [2024-10-30 12:40:44.830817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.411 [2024-10-30 12:40:44.881007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:12.411 [2024-10-30 12:40:44.898058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.411 [2024-10-30 12:40:44.947695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:12.411 Running I/O for 1 seconds...
00:29:12.411 Running I/O for 1 seconds...
00:29:12.411 Running I/O for 1 seconds...
00:29:12.669 Running I/O for 1 seconds...
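Each of the four bdevperf instances above consumed the generated attach JSON via --json /dev/fd/63. Outside the harness, the same attachment could be issued against a running app over its RPC socket; the following is an illustrative equivalent, with the socket path assumed, not something the trace itself runs.

# Hypothetical stand-alone equivalent of the generated attach config:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1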
00:29:13.598 9811.00 IOPS, 38.32 MiB/s 00:29:13.598 Latency(us) 00:29:13.598 [2024-10-30T11:40:46.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.598 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:13.598 Nvme1n1 : 1.01 9863.51 38.53 0.00 0.00 12919.58 4805.97 15534.46 00:29:13.598 [2024-10-30T11:40:46.279Z] =================================================================================================================== 00:29:13.598 [2024-10-30T11:40:46.279Z] Total : 9863.51 38.53 0.00 0.00 12919.58 4805.97 15534.46 00:29:13.598 4982.00 IOPS, 19.46 MiB/s 00:29:13.598 Latency(us) 00:29:13.598 [2024-10-30T11:40:46.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.598 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:13.598 Nvme1n1 : 1.02 5009.77 19.57 0.00 0.00 25206.41 2512.21 38059.43 00:29:13.598 [2024-10-30T11:40:46.279Z] =================================================================================================================== 00:29:13.598 [2024-10-30T11:40:46.279Z] Total : 5009.77 19.57 0.00 0.00 25206.41 2512.21 38059.43 00:29:13.598 196112.00 IOPS, 766.06 MiB/s 00:29:13.598 Latency(us) 00:29:13.598 [2024-10-30T11:40:46.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.598 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:13.598 Nvme1n1 : 1.00 195741.63 764.62 0.00 0.00 650.43 291.27 1868.99 00:29:13.598 [2024-10-30T11:40:46.279Z] =================================================================================================================== 00:29:13.598 [2024-10-30T11:40:46.279Z] Total : 195741.63 764.62 0.00 0.00 650.43 291.27 1868.99 00:29:13.598 4953.00 IOPS, 19.35 MiB/s 00:29:13.598 Latency(us) 00:29:13.598 [2024-10-30T11:40:46.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.598 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:13.598 Nvme1n1 : 1.01 5035.44 19.67 0.00 0.00 25292.95 8107.05 47768.46 00:29:13.598 [2024-10-30T11:40:46.279Z] =================================================================================================================== 00:29:13.598 [2024-10-30T11:40:46.279Z] Total : 5035.44 19.67 0.00 0.00 25292.95 8107.05 47768.46 00:29:13.598 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 754032 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 754034 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 754037 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
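The MiB/s column in the result tables above is derived directly from IOPS at the 4096-byte I/O size, which is why the flush workload's much higher IOPS carries straight through to its MiB/s figure. A quick check of the write row:

# 4096-byte I/Os: MiB/s = IOPS * 4096 / 2^20
awk 'BEGIN { printf "%.2f MiB/s\n", 9863.51 * 4096 / 1048576 }'   # -> 38.53, matching the write row
# Same math on the flush row: 195741.63 * 4096 / 1048576 -> 764.62 MiB/s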
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.855 rmmod nvme_tcp 00:29:13.855 rmmod nvme_fabrics 00:29:13.855 rmmod nvme_keyring 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 753879 ']' 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 753879 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 753879 ']' 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 753879 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 753879 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 753879' 00:29:13.855 killing process with pid 753879 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 753879 00:29:13.855 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 753879 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:14.113 
12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.113 12:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.012 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.012 00:29:16.012 real 0m7.208s 00:29:16.012 user 0m13.754s 00:29:16.012 sys 0m3.982s 00:29:16.012 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.012 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:16.012 ************************************ 00:29:16.012 END TEST nvmf_bdev_io_wait 00:29:16.012 ************************************ 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:16.272 ************************************ 00:29:16.272 START TEST nvmf_queue_depth 00:29:16.272 ************************************ 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:16.272 * Looking for test storage... 
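The teardown just traced is self-cleaning because every firewall rule the test added was tagged at insert time with an SPDK_NVMF comment, so the iptr helper only has to filter the tag back out. In essence (the iptables pipeline is verbatim from the trace; the netns deletion is an assumption about what _remove_spdk_ns amounts to):

# Rules were inserted with "-m comment --comment SPDK_NVMF:...", so cleanup is a filter:
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Presumed gist of _remove_spdk_ns, using this run's namespace name:
ip netns delete cvl_0_0_ns_spdk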
00:29:16.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:16.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.272 --rc genhtml_branch_coverage=1 00:29:16.272 --rc genhtml_function_coverage=1 00:29:16.272 --rc genhtml_legend=1 00:29:16.272 --rc geninfo_all_blocks=1 00:29:16.272 --rc geninfo_unexecuted_blocks=1 00:29:16.272 00:29:16.272 ' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:16.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.272 --rc genhtml_branch_coverage=1 00:29:16.272 --rc genhtml_function_coverage=1 00:29:16.272 --rc genhtml_legend=1 00:29:16.272 --rc geninfo_all_blocks=1 00:29:16.272 --rc geninfo_unexecuted_blocks=1 00:29:16.272 00:29:16.272 ' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:16.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.272 --rc genhtml_branch_coverage=1 00:29:16.272 --rc genhtml_function_coverage=1 00:29:16.272 --rc genhtml_legend=1 00:29:16.272 --rc geninfo_all_blocks=1 00:29:16.272 --rc geninfo_unexecuted_blocks=1 00:29:16.272 00:29:16.272 ' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:16.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.272 --rc genhtml_branch_coverage=1 00:29:16.272 --rc genhtml_function_coverage=1 00:29:16.272 --rc genhtml_legend=1 00:29:16.272 --rc geninfo_all_blocks=1 00:29:16.272 --rc 
geninfo_unexecuted_blocks=1 00:29:16.272 00:29:16.272 ' 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.272 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
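The lt 1.15 2 gate traced a few lines up drives which lcov option set is exported: scripts/common.sh splits each version string on ., - and : and compares field by field. A condensed sketch of that comparison, simplified from the traced helpers (the real script also validates each field with its decimal helper):

# Field-wise version compare as traced (split on .-:, missing fields count as 0).
cmp_versions() {
  local IFS=.-: op=$2 v d1 d2
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    d1=${ver1[v]:-0} d2=${ver2[v]:-0}
    ((d1 > d2)) && { [[ $op == '>' ]]; return; }
    ((d1 < d2)) && { [[ $op == '<' ]]; return; }
  done
  [[ $op == '<=' || $op == '>=' || $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 option style"   # this run's path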
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.273 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.807 12:40:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.807 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:29:18.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.808 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:29:18.808 00:29:18.808 --- 10.0.0.2 ping statistics --- 00:29:18.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.808 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:29:18.808 00:29:18.808 --- 10.0.0.1 ping statistics --- 00:29:18.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.808 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=756187 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 756187 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 756187 ']' 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
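The two successful pings above close out the fixture the preceding ip/netns commands built: the first E810 port (cvl_0_0) is moved into a fresh namespace as the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the traced commands:

# Target-side NIC isolated in a namespace, initiator NIC left in the host:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP in, tagged so iptr can strip it during teardown:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1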
00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:18.808 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.808 [2024-10-30 12:40:51.166556] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:18.808 [2024-10-30 12:40:51.167644] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:29:18.808 [2024-10-30 12:40:51.167716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.808 [2024-10-30 12:40:51.243530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.808 [2024-10-30 12:40:51.299454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.808 [2024-10-30 12:40:51.299510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.808 [2024-10-30 12:40:51.299532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.808 [2024-10-30 12:40:51.299543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.808 [2024-10-30 12:40:51.299553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.808 [2024-10-30 12:40:51.300114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.808 [2024-10-30 12:40:51.384548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:18.809 [2024-10-30 12:40:51.384833] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
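With the fabric wired up, the target app itself is started inside the namespace; the --interrupt-mode flag is what produces the thread.c notices above (reactors sleep on file descriptors instead of busy-polling). The launch is as traced; the poll loop below paraphrases what waitforlisten does, not its actual implementation, and the binary path is shortened:

# Start nvmf_tgt pinned to core 1 (-m 0x2) in interrupt mode, inside the netns:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# waitforlisten, in spirit: poll the RPC socket until the app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  sleep 0.5
done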
00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.809 [2024-10-30 12:40:51.436677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.809 Malloc0 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.809 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:19.067 [2024-10-30 12:40:51.500811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=756273 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 756273 /var/tmp/bdevperf.sock 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 756273 ']' 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:19.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:19.067 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:19.067 [2024-10-30 12:40:51.547169] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
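Everything the rpc_cmd calls above did to publish a namespace can be replayed as five RPCs against the target's socket; the argument spellings below mirror the traced calls, while the rpc.py and socket paths are illustrative:

# Build the target: TCP transport, a 64 MiB / 512 B malloc bdev, one subsystem,
# the bdev as its namespace, and a TCP listener on 10.0.0.2:4420.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420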
00:29:19.067 [2024-10-30 12:40:51.547231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756273 ] 00:29:19.067 [2024-10-30 12:40:51.611143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.067 [2024-10-30 12:40:51.667451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:19.325 NVMe0n1 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.325 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:19.583 Running I/O for 10 seconds... 00:29:21.449 8192.00 IOPS, 32.00 MiB/s [2024-10-30T11:40:55.498Z] 8491.50 IOPS, 33.17 MiB/s [2024-10-30T11:40:56.431Z] 8538.33 IOPS, 33.35 MiB/s [2024-10-30T11:40:57.364Z] 8697.50 IOPS, 33.97 MiB/s [2024-10-30T11:40:58.296Z] 8631.80 IOPS, 33.72 MiB/s [2024-10-30T11:40:59.228Z] 8700.33 IOPS, 33.99 MiB/s [2024-10-30T11:41:00.158Z] 8667.71 IOPS, 33.86 MiB/s [2024-10-30T11:41:01.529Z] 8704.38 IOPS, 34.00 MiB/s [2024-10-30T11:41:02.461Z] 8739.22 IOPS, 34.14 MiB/s [2024-10-30T11:41:02.461Z] 8710.60 IOPS, 34.03 MiB/s 00:29:29.780 Latency(us) 00:29:29.780 [2024-10-30T11:41:02.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.780 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:29.780 Verification LBA range: start 0x0 length 0x4000 00:29:29.780 NVMe0n1 : 10.07 8754.95 34.20 0.00 0.00 116488.34 8980.86 68351.62 00:29:29.780 [2024-10-30T11:41:02.461Z] =================================================================================================================== 00:29:29.780 [2024-10-30T11:41:02.461Z] Total : 8754.95 34.20 0.00 0.00 116488.34 8980.86 68351.62 00:29:29.780 { 00:29:29.780 "results": [ 00:29:29.780 { 00:29:29.780 "job": "NVMe0n1", 00:29:29.780 "core_mask": "0x1", 00:29:29.780 "workload": "verify", 00:29:29.780 "status": "finished", 00:29:29.780 "verify_range": { 00:29:29.780 "start": 0, 00:29:29.780 "length": 16384 00:29:29.780 }, 00:29:29.780 "queue_depth": 1024, 00:29:29.780 "io_size": 4096, 00:29:29.780 "runtime": 10.065389, 00:29:29.780 "iops": 8754.952242779687, 00:29:29.780 "mibps": 34.199032198358154, 00:29:29.780 "io_failed": 0, 00:29:29.780 "io_timeout": 0, 00:29:29.780 "avg_latency_us": 116488.34178420993, 00:29:29.780 "min_latency_us": 8980.85925925926, 00:29:29.780 "max_latency_us": 68351.62074074073 00:29:29.780 } 00:29:29.780 ], 
00:29:29.780 "core_count": 1 00:29:29.780 } 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 756273 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 756273 ']' 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 756273 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 756273 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 756273' 00:29:29.780 killing process with pid 756273 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 756273 00:29:29.780 Received shutdown signal, test time was about 10.000000 seconds 00:29:29.780 00:29:29.780 Latency(us) 00:29:29.780 [2024-10-30T11:41:02.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.780 [2024-10-30T11:41:02.461Z] =================================================================================================================== 00:29:29.780 [2024-10-30T11:41:02.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 756273 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.780 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.780 rmmod nvme_tcp 00:29:29.780 rmmod nvme_fabrics 00:29:29.780 rmmod nvme_keyring 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:30.037 12:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 756187 ']' 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 756187 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 756187 ']' 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 756187 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 756187 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 756187' 00:29:30.037 killing process with pid 756187 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 756187 00:29:30.037 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 756187 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.296 12:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.200 00:29:32.200 real 0m16.086s 00:29:32.200 user 0m22.282s 00:29:32.200 sys 0m3.302s 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:32.200 ************************************ 00:29:32.200 END TEST nvmf_queue_depth 00:29:32.200 ************************************ 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.200 ************************************ 00:29:32.200 START TEST nvmf_target_multipath 00:29:32.200 ************************************ 00:29:32.200 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:32.459 * Looking for test storage... 00:29:32.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.459 12:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:32.459 12:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.459 --rc genhtml_branch_coverage=1 00:29:32.459 --rc genhtml_function_coverage=1 00:29:32.459 --rc genhtml_legend=1 00:29:32.459 --rc geninfo_all_blocks=1 00:29:32.459 --rc geninfo_unexecuted_blocks=1 00:29:32.459 00:29:32.459 ' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.459 --rc genhtml_branch_coverage=1 00:29:32.459 --rc genhtml_function_coverage=1 00:29:32.459 --rc genhtml_legend=1 00:29:32.459 --rc geninfo_all_blocks=1 00:29:32.459 --rc geninfo_unexecuted_blocks=1 00:29:32.459 00:29:32.459 ' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.459 --rc genhtml_branch_coverage=1 00:29:32.459 --rc genhtml_function_coverage=1 00:29:32.459 --rc genhtml_legend=1 00:29:32.459 --rc geninfo_all_blocks=1 00:29:32.459 --rc 
geninfo_unexecuted_blocks=1 00:29:32.459 00:29:32.459 ' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.459 --rc genhtml_branch_coverage=1 00:29:32.459 --rc genhtml_function_coverage=1 00:29:32.459 --rc genhtml_legend=1 00:29:32.459 --rc geninfo_all_blocks=1 00:29:32.459 --rc geninfo_unexecuted_blocks=1 00:29:32.459 00:29:32.459 ' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.459 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
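The queue-depth run traced above reduces to two commands issued against the bdevperf RPC socket. A minimal sketch, assuming rpc_cmd resolves to scripts/rpc.py as in SPDK's common test helpers, and that bdevperf itself was launched with the queue depth 1024 / IO size 4096 / verify workload implied by the result summary (the launch line does not appear in this excerpt):

    #!/usr/bin/env bash
    # Sketch only: paths, address, and NQN are copied from the trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # target/queue_depth.sh@34: attach the TCP controller to bdevperf.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

    # target/queue_depth.sh@35: start the timed run (10 seconds in this log,
    # landing at ~8755 IOPS / 34.2 MiB/s before the shutdown signal).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests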
00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.460 12:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.460 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
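The nvmftestinit plumbing that the next stretch of trace walks through (PCI scan, namespace creation, addressing, firewall rule, ping checks) condenses to the commands below. A minimal sketch, assuming the two cvl_0_* netdevs discovered in the following lines; each command is taken from the nvmf/common.sh trace in this log:

    # nvmf_tcp_init as traced below: interfaces cvl_0_0 (target) and cvl_0_1
    # (initiator), addresses 10.0.0.2 / 10.0.0.1, namespace cvl_0_0_ns_spdk.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target namespace -> host

With only this single port pair available, NVMF_SECOND_TARGET_IP stays empty, which is why multipath.sh later prints 'only one NIC for nvmf test' and exits cleanly after the ping checks.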
00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.992 12:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.992 12:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.992 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:34.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:29:34.993 00:29:34.993 --- 10.0.0.2 ping statistics --- 00:29:34.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.993 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:29:34.993 00:29:34.993 --- 10.0.0.1 ping statistics --- 00:29:34.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.993 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:34.993 only one NIC for nvmf test 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.993 rmmod nvme_tcp 00:29:34.993 rmmod nvme_fabrics 00:29:34.993 rmmod nvme_keyring 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:34.993 12:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.993 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:36.899 12:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:36.899 00:29:36.899 real 0m4.635s 00:29:36.899 user 0m0.984s 00:29:36.899 sys 0m1.664s 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:36.899 ************************************ 00:29:36.899 END TEST nvmf_target_multipath 00:29:36.899 ************************************ 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:36.899 ************************************ 00:29:36.899 START TEST nvmf_zcopy 00:29:36.899 ************************************ 00:29:36.899 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:37.157 * Looking for test storage... 
00:29:37.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:37.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.157 --rc genhtml_branch_coverage=1 00:29:37.157 --rc genhtml_function_coverage=1 00:29:37.157 --rc genhtml_legend=1 00:29:37.157 --rc geninfo_all_blocks=1 00:29:37.157 --rc geninfo_unexecuted_blocks=1 00:29:37.157 00:29:37.157 ' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:37.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.157 --rc genhtml_branch_coverage=1 00:29:37.157 --rc genhtml_function_coverage=1 00:29:37.157 --rc genhtml_legend=1 00:29:37.157 --rc geninfo_all_blocks=1 00:29:37.157 --rc geninfo_unexecuted_blocks=1 00:29:37.157 00:29:37.157 ' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:37.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.157 --rc genhtml_branch_coverage=1 00:29:37.157 --rc genhtml_function_coverage=1 00:29:37.157 --rc genhtml_legend=1 00:29:37.157 --rc geninfo_all_blocks=1 00:29:37.157 --rc geninfo_unexecuted_blocks=1 00:29:37.157 00:29:37.157 ' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:37.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.157 --rc genhtml_branch_coverage=1 00:29:37.157 --rc genhtml_function_coverage=1 00:29:37.157 --rc genhtml_legend=1 00:29:37.157 --rc geninfo_all_blocks=1 00:29:37.157 --rc geninfo_unexecuted_blocks=1 00:29:37.157 00:29:37.157 ' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.157 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.157 12:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.158 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:39.744 12:41:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:39.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:39.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:39.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:39.744 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.744 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:39.745 12:41:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:39.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:29:39.745 00:29:39.745 --- 10.0.0.2 ping statistics --- 00:29:39.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.745 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:29:39.745 00:29:39.745 --- 10.0.0.1 ping statistics --- 00:29:39.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.745 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:39.745 12:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=761451 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 761451 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 761451 ']' 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 [2024-10-30 12:41:12.059806] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:39.745 [2024-10-30 12:41:12.060871] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:29:39.745 [2024-10-30 12:41:12.060923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.745 [2024-10-30 12:41:12.131049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.745 [2024-10-30 12:41:12.187847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.745 [2024-10-30 12:41:12.187901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.745 [2024-10-30 12:41:12.187924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.745 [2024-10-30 12:41:12.187935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.745 [2024-10-30 12:41:12.187946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.745 [2024-10-30 12:41:12.188524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.745 [2024-10-30 12:41:12.275497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:39.745 [2024-10-30 12:41:12.275802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
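For orientation, the nvmfappstart -m 0x2 step traced here reduces to the shell sequence below. This is a condensed sketch, not a verbatim excerpt of the harness: the polling loop stands in for waitforlisten, and rpc_get_methods is just one convenient liveness probe.

# Launch the target inside the test namespace, flags as traced above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Block until the app answers on its default RPC socket (waitforlisten's job).
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done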
00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 [2024-10-30 12:41:12.333087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 [2024-10-30 12:41:12.349282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:39.745 12:41:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 malloc0 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:39.745 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:39.745 { 00:29:39.745 "params": { 00:29:39.745 "name": "Nvme$subsystem", 00:29:39.745 "trtype": "$TEST_TRANSPORT", 00:29:39.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.745 "adrfam": "ipv4", 00:29:39.745 "trsvcid": "$NVMF_PORT", 00:29:39.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.746 "hdgst": ${hdgst:-false}, 00:29:39.746 "ddgst": ${ddgst:-false} 00:29:39.746 }, 00:29:39.746 "method": "bdev_nvme_attach_controller" 00:29:39.746 } 00:29:39.746 EOF 00:29:39.746 )") 00:29:39.746 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:39.746 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:39.746 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:39.746 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:39.746 "params": { 00:29:39.746 "name": "Nvme1", 00:29:39.746 "trtype": "tcp", 00:29:39.746 "traddr": "10.0.0.2", 00:29:39.746 "adrfam": "ipv4", 00:29:39.746 "trsvcid": "4420", 00:29:39.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.746 "hdgst": false, 00:29:39.746 "ddgst": false 00:29:39.746 }, 00:29:39.746 "method": "bdev_nvme_attach_controller" 00:29:39.746 }' 00:29:40.005 [2024-10-30 12:41:12.434201] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
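Pulling the rpc_cmd invocations out of the surrounding trace, the target provisioning amounts to the scripts/rpc.py calls below. Flags are copied from the trace; the $rpc shorthand is introduced here purely for readability.

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                       # allow any host, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose malloc0 as NSID 1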
00:29:40.005 [2024-10-30 12:41:12.434291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761471 ] 00:29:40.005 [2024-10-30 12:41:12.512427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.005 [2024-10-30 12:41:12.569148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.264 Running I/O for 10 seconds... 00:29:42.139 5557.00 IOPS, 43.41 MiB/s [2024-10-30T11:41:16.203Z] 5541.00 IOPS, 43.29 MiB/s [2024-10-30T11:41:17.140Z] 5580.00 IOPS, 43.59 MiB/s [2024-10-30T11:41:18.076Z] 5560.00 IOPS, 43.44 MiB/s [2024-10-30T11:41:19.012Z] 5567.20 IOPS, 43.49 MiB/s [2024-10-30T11:41:19.951Z] 5561.67 IOPS, 43.45 MiB/s [2024-10-30T11:41:20.889Z] 5567.71 IOPS, 43.50 MiB/s [2024-10-30T11:41:21.826Z] 5566.00 IOPS, 43.48 MiB/s [2024-10-30T11:41:23.207Z] 5562.22 IOPS, 43.45 MiB/s [2024-10-30T11:41:23.207Z] 5570.40 IOPS, 43.52 MiB/s
00:29:50.526                                                                 Latency(us)
00:29:50.526 [2024-10-30T11:41:23.207Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average       min       max
00:29:50.526 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:29:50.526 Verification LBA range: start 0x0 length 0x1000
00:29:50.526 Nvme1n1            :      10.06  5551.60   43.37    0.00  0.00  22902.22   3070.48  43496.49
00:29:50.526 [2024-10-30T11:41:23.207Z] ===================================================================================================================
00:29:50.526 [2024-10-30T11:41:23.207Z] Total              :             5551.60   43.37    0.00  0.00  22902.22   3070.48  43496.49
00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=762660 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.526 { 00:29:50.526 "params": { 00:29:50.526 "name": "Nvme$subsystem", 00:29:50.526 "trtype": "$TEST_TRANSPORT", 00:29:50.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.526 "adrfam": "ipv4", 00:29:50.526 "trsvcid": "$NVMF_PORT", 00:29:50.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.526 "hdgst": ${hdgst:-false}, 00:29:50.526 "ddgst": ${ddgst:-false} 00:29:50.526 }, 00:29:50.526 "method": "bdev_nvme_attach_controller" 00:29:50.526 } 00:29:50.526 EOF 00:29:50.526 )") 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:50.526
[2024-10-30 12:41:23.089041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.089080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:50.526 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.526 "params": { 00:29:50.526 "name": "Nvme1", 00:29:50.526 "trtype": "tcp", 00:29:50.526 "traddr": "10.0.0.2", 00:29:50.526 "adrfam": "ipv4", 00:29:50.526 "trsvcid": "4420", 00:29:50.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.526 "hdgst": false, 00:29:50.526 "ddgst": false 00:29:50.526 }, 00:29:50.526 "method": "bdev_nvme_attach_controller" 00:29:50.526 }' 00:29:50.526 [2024-10-30 12:41:23.096989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.097010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.526 [2024-10-30 12:41:23.104987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.105007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.526 [2024-10-30 12:41:23.112987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.113007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.526 [2024-10-30 12:41:23.120986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.121005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.526 [2024-10-30 12:41:23.128987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.129006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.526 [2024-10-30 12:41:23.132887] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
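The bdevperf job being assembled here reads the generated attach-controller JSON through a file descriptor; reconstructed as a standalone command under the same helpers, the traced launch looks roughly like this (gen_nvmf_target_json is the nvmf/common.sh function whose heredoc appears above):

# Second perf pass: 5 s of 50/50 random read/write at queue depth 128 with
# 8 KiB I/Os, the JSON config fed in over process substitution (/dev/fd/63).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!   # 762660 in this run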
00:29:50.526 [2024-10-30 12:41:23.132978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762660 ] 00:29:50.526 [2024-10-30 12:41:23.136986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.526 [2024-10-30 12:41:23.137006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.144987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.145006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.152986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.153005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.160987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.161015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.168989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.169009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.176988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.177009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.184988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.185008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.192988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.193008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.200988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.201008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.527 [2024-10-30 12:41:23.201414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.527 [2024-10-30 12:41:23.209022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.527 [2024-10-30 12:41:23.209053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.786 [2024-10-30 12:41:23.217027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.786 [2024-10-30 12:41:23.217069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.786 [2024-10-30 12:41:23.224998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.786 [2024-10-30 12:41:23.225021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.786 [2024-10-30 12:41:23.232992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.786 [2024-10-30 12:41:23.233013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:50.786 [2024-10-30 12:41:23.240994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.786 [2024-10-30 12:41:23.241016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.786 [2024-10-30 12:41:23.248990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.786 [2024-10-30 12:41:23.249012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.786 [2024-10-30 12:41:23.256992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.786 [2024-10-30 12:41:23.257018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.786 [2024-10-30 12:41:23.264990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.265010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.265901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.787 [2024-10-30 12:41:23.272989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.273010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.281024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.281055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.289025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.289064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.297021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.297072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.305018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.305066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.313019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.313067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.321020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.321055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.329003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.329032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.337007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.337038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.345023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.345060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 
12:41:23.353022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.353063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.360997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.361020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.368989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.369009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.377342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.377370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.384997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.385022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.392995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.393018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.400996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.401019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.408995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.409019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.416995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.417018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.424995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.425018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.787 [2024-10-30 12:41:23.432996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.787 [2024-10-30 12:41:23.433019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.047 [2024-10-30 12:41:23.475475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.047 [2024-10-30 12:41:23.475504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.047 [2024-10-30 12:41:23.480996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.047 [2024-10-30 12:41:23.481019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.047 [2024-10-30 12:41:23.488995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.047 [2024-10-30 12:41:23.489026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 Running I/O for 5 seconds... 
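Each subsystem.c/nvmf_rpc.c ERROR pair that follows is one nvmf_subsystem_add_ns attempt bouncing off the already-attached NSID 1 while the 5-second job is in flight. A loop of roughly the following shape reproduces the pattern; this is a sketch of the retry behavior, not zcopy.sh verbatim.

# Keep re-issuing the add-namespace RPC while bdevperf runs; every attempt is
# rejected with "Requested NSID 1 already in use" until the namespace goes away.
while kill -0 "$perfpid" 2>/dev/null; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done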
00:29:51.048 [2024-10-30 12:41:23.504142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.504170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.517020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.517049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.526633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.526660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.538804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.538832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.550420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.550446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.561228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.561264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.572435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.572462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.584014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.584041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.597508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.597536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.607355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.607383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.619483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.619509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.630150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.630175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.641758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.641783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.652691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.652717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.663732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 
[2024-10-30 12:41:23.663758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.678019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.678045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.687049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.687074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.702245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.702278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.712886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.712921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.048 [2024-10-30 12:41:23.725197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.048 [2024-10-30 12:41:23.725223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.736873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.736899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.747998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.748025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.762627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.762668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.771798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.771824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.784081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.784118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.798785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.798812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.808502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.808528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.821040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.821065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.831786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.308 [2024-10-30 12:41:23.831812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.308 [2024-10-30 12:41:23.847214] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:51.308 [2024-10-30 12:41:23.847264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at ~10-15 ms intervals through 12:41:24.494, one pair per nvmf_subsystem_add_ns attempt; trimmed for readability ...]
00:29:51.829 11316.00 IOPS, 88.41 MiB/s [2024-10-30T11:41:24.510Z]
[... error pair continues at the same cadence through 12:41:25.494 ...]
00:29:52.865 11316.50 IOPS, 88.41 MiB/s [2024-10-30T11:41:25.546Z]
[... error pair continues through 12:41:25.564 ...]
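The two throughput checkpoints above fix the workload geometry: bandwidth is IOPS times IO size, and 88.41 MiB/s at 11316.00 IOPS works out to 8192 bytes per IO, i.e. an 8 KiB block size (the later 11281.67 IOPS / 88.14 MiB/s checkpoint gives the same figure). A quick sanity check, using numbers copied from the checkpoint lines:

# bytes per IO = (MiB/s * 2^20) / IOPS
awk 'BEGIN { printf "%.0f bytes per IO\n", (88.41 * 1024 * 1024) / 11316.00 }'
# -> 8192 bytes per IO, i.e. an 8 KiB I/O size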
00:29:53.125 [2024-10-30 12:41:25.576303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:53.125 [2024-10-30 12:41:25.576330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... error pair continues at the same cadence through 12:41:26.494 ...]
00:29:53.906 11281.67 IOPS, 88.14 MiB/s [2024-10-30T11:41:26.587Z]
[... error pair continues through 12:41:27.492 ...]
00:29:54.941 [2024-10-30 12:41:27.492462]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 11286.25 IOPS, 88.17 MiB/s [2024-10-30T11:41:27.622Z] [2024-10-30 12:41:27.506386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.506413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.516141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.516167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.528543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.528584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.540041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.540067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.554060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.554093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.563741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.563766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.576282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.576309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.589424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.589451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.598958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.598983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.941 [2024-10-30 12:41:27.614269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.941 [2024-10-30 12:41:27.614308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.625138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.625164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.636115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.636142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.647708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.647735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.662547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.662588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 
12:41:27.672129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.672154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.684421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.684448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.699016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.699056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.708883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.708909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.721126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.721152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.732390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.732417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.745953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.745979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.755861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.755888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.768445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.768473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.782549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.782588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.793172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.793197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.805622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.805652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.816639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.816669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.828107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.828133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.839948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.839972] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.854378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.854405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.864008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.864034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.199 [2024-10-30 12:41:27.876498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.199 [2024-10-30 12:41:27.876525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.459 [2024-10-30 12:41:27.887524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.459 [2024-10-30 12:41:27.887553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.459 [2024-10-30 12:41:27.902923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.459 [2024-10-30 12:41:27.902962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.459 [2024-10-30 12:41:27.912583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.459 [2024-10-30 12:41:27.912609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.459 [2024-10-30 12:41:27.924961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.459 [2024-10-30 12:41:27.924987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.459 [2024-10-30 12:41:27.936334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.459 [2024-10-30 12:41:27.936361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.459 [2024-10-30 12:41:27.947446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.459 [2024-10-30 12:41:27.947472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:27.958586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:27.958613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:27.969863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:27.969889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:27.981212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:27.981237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:27.992254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:27.992290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.003088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.003123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.014619] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.014645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.025797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.025824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.037225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.037252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.048219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.048268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.062419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.062446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.071730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.071757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.084447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.084473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.098293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.098335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.107641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.107666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.120215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.120240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.460 [2024-10-30 12:41:28.134696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.460 [2024-10-30 12:41:28.134737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.143913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.143943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.156364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.156392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.167983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.168007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.178791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.178816] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.190298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.190324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.201321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.201347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.212116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.212142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.227457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.227492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.237434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.237461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.251843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.251868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.265218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.265268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.274956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.274983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.290409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.290435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.301100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.301126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.313474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.313501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.323425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.323453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.335157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.335184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.350099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.350125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.359648] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.359674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.371973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.372000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.387519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.387559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.721 [2024-10-30 12:41:28.402342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.721 [2024-10-30 12:41:28.402369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.980 [2024-10-30 12:41:28.412152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.980 [2024-10-30 12:41:28.412179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.424702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.424728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.436003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.436029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.449597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.449623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.459388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.459417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.471670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.471697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.487311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.487338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.502169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.502195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 11281.40 IOPS, 88.14 MiB/s [2024-10-30T11:41:28.662Z] [2024-10-30 12:41:28.511010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.511036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 00:29:55.981 Latency(us) 00:29:55.981 [2024-10-30T11:41:28.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.981 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:55.981 Nvme1n1 : 5.01 11285.70 88.17 0.00 0.00 11328.14 2815.62 18252.99 00:29:55.981 [2024-10-30T11:41:28.662Z] 
=================================================================================================================== 00:29:55.981 [2024-10-30T11:41:28.662Z] Total : 11285.70 88.17 0.00 0.00 11328.14 2815.62 18252.99 00:29:55.981 [2024-10-30 12:41:28.517010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.517033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.524995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.525019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.532994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.533015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.541044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.541091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.549042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.549091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.557034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.557081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.565039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.565086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.573029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.573076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.581038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.581086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.589034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.589081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.597034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.597082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.605036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.605085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.613041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.613089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.981 [2024-10-30 12:41:28.621040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.981 [2024-10-30 12:41:28.621090] 
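This stretch is test/nvmf/target/zcopy.sh deliberately provoking the failure: every nvmf_subsystem_add_ns RPC asks for NSID 1, which the running bdevperf job already occupies, and the nvmf_rpc_ns_paused frames show the rejection landing on the paused-subsystem add path. The collision can be reproduced by hand against a running target; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket, a subsystem nqn.2016-06.io.spdk:cnode1 that already serves NSID 1 (as in this run), and a hypothetical spare bdev named malloc1:

  # create a spare bdev (64 MiB, 512 B blocks), then try to attach it as NSID 1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1
  # -> fails with "Requested NSID 1 already in use"; omitting -n instead lets
  #    the target assign the next free NSID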
[... the same error pair continues after the summary, from 12:41:28.517010 through 12:41:28.733007, as the add-namespace loop winds down ...]
00:29:56.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (762660) - No such process 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 762660 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:56.240 delay0 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.240 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:56.240 [2024-10-30 12:41:28.892418] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:04.361 Initializing NVMe Controllers 00:30:04.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:04.361 Initialization complete. Launching workers. 
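The abort stage that just launched leans on the bdev_delay_create call above: wrapping malloc0 in delay0 with ~1 s average and p99 latencies (the -r/-t/-w/-n values are in microseconds) keeps the 64 queued I/Os in flight long enough for the abort example to cancel them. A standalone sketch of the same setup, assuming the same target, bdev names, and listener address as this run:

  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue 50/50 random R/W I/O at depth 64 for 5 s and issue aborts against it
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'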
00:30:04.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 273, failed: 14552 00:30:04.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14732, failed to submit 93 00:30:04.361 success 14641, unsuccessful 91, failed 0 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.361 rmmod nvme_tcp 00:30:04.361 rmmod nvme_fabrics 00:30:04.361 rmmod nvme_keyring 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 761451 ']' 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 761451 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 761451 ']' 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 761451 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 761451 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 761451' 00:30:04.361 killing process with pid 761451 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 761451 00:30:04.361 12:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 761451 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.361 12:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.361 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.745 00:30:05.745 real 0m28.596s 00:30:05.745 user 0m40.539s 00:30:05.745 sys 0m10.067s 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.745 ************************************ 00:30:05.745 END TEST nvmf_zcopy 00:30:05.745 ************************************ 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.745 ************************************ 00:30:05.745 START TEST nvmf_nmic 00:30:05.745 ************************************ 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:05.745 * Looking for test storage... 
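With nvmf_zcopy finished (real 0m28.596s above), run_test has moved on to nvmf_nmic. If this one test needs to be rerun outside the Jenkins harness, the same entry point can be invoked directly; a sketch assuming the workspace layout of this run and an already-built SPDK tree:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode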
00:30:05.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:05.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.745 --rc genhtml_branch_coverage=1 00:30:05.745 --rc genhtml_function_coverage=1 00:30:05.745 --rc genhtml_legend=1 00:30:05.745 --rc geninfo_all_blocks=1 00:30:05.745 --rc geninfo_unexecuted_blocks=1 00:30:05.745 00:30:05.745 ' 00:30:05.745 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:05.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.745 --rc genhtml_branch_coverage=1 00:30:05.746 --rc genhtml_function_coverage=1 00:30:05.746 --rc genhtml_legend=1 00:30:05.746 --rc geninfo_all_blocks=1 00:30:05.746 --rc geninfo_unexecuted_blocks=1 00:30:05.746 00:30:05.746 ' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:05.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.746 --rc genhtml_branch_coverage=1 00:30:05.746 --rc genhtml_function_coverage=1 00:30:05.746 --rc genhtml_legend=1 00:30:05.746 --rc geninfo_all_blocks=1 00:30:05.746 --rc geninfo_unexecuted_blocks=1 00:30:05.746 00:30:05.746 ' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:05.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.746 --rc genhtml_branch_coverage=1 00:30:05.746 --rc genhtml_function_coverage=1 00:30:05.746 --rc genhtml_legend=1 00:30:05.746 --rc geninfo_all_blocks=1 00:30:05.746 --rc geninfo_unexecuted_blocks=1 00:30:05.746 00:30:05.746 ' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.746 12:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.746 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.281 12:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:08.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.281 12:41:40 
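An aside on the device scan running here: gather_supported_nvmf_pci_devs builds per-family arrays of vendor:device IDs (intel=0x8086, mellanox=0x15b3; e810 matches 0x1592/0x159b, x722 matches 0x37d2, and the mlx array collects the Mellanox IDs 0x1013 through 0xa2dc), then, because this job sets SPDK_TEST_NVMF_NICS=e810, narrows pci_devs to just the e810 entries before walking them. A standalone approximation of that scan follows; the harness reads a prebuilt pci_bus_cache map rather than shelling out, so the lspci calls below are an illustrative assumption only:

  #!/usr/bin/env bash
  # Find E810-family PCI functions by vendor:device ID, as this run's filter does.
  intel=8086
  for dev_id in 1592 159b; do
      lspci -D -d "${intel}:${dev_id}"   # one line per matching PCI function
  done

Both ports reported in the trace (0000:0a:00.0 and 0000:0a:00.1, device 0x159b, bound to the ice driver) fall out of exactly this match.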
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:08.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:08.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.281 
12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:08.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.281 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
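Read as one unit, the namespace plumbing above gives each NVMe-oF end its own network stack: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, while the initiator keeps cvl_0_1 with 10.0.0.1/24 in the root namespace, so initiator-to-target traffic must cross the physical E810 link rather than kernel loopback. Reassembled from the commands in the trace (interface and namespace names exactly as logged; run as root):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk \
      ip addr add 10.0.0.2/24 dev cvl_0_0       # target address, inside the ns

The link-up commands and the iptables ACCEPT rule for port 4420 that follow complete the path, and the two pings confirm connectivity in both directions before any NVMe traffic is attempted.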
00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:30:08.282 00:30:08.282 --- 10.0.0.2 ping statistics --- 00:30:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.282 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:30:08.282 00:30:08.282 --- 10.0.0.1 ping statistics --- 00:30:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.282 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=766155 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 766155 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 766155 ']' 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.282 [2024-10-30 12:41:40.674278] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.282 [2024-10-30 12:41:40.675479] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:30:08.282 [2024-10-30 12:41:40.675539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.282 [2024-10-30 12:41:40.753412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.282 [2024-10-30 12:41:40.814900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.282 [2024-10-30 12:41:40.814962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.282 [2024-10-30 12:41:40.814975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.282 [2024-10-30 12:41:40.814990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.282 [2024-10-30 12:41:40.815000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.282 [2024-10-30 12:41:40.816541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.282 [2024-10-30 12:41:40.816674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.282 [2024-10-30 12:41:40.816745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.282 [2024-10-30 12:41:40.816749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.282 [2024-10-30 12:41:40.913220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:08.282 [2024-10-30 12:41:40.913458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:08.282 [2024-10-30 12:41:40.913725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
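With networking up, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF), records nvmfpid=766155, and waitforlisten blocks until the app answers on its UNIX-domain RPC socket; the reactor and spdk_thread notices above are the interrupt-mode bring-up on cores 0-3. A minimal sketch of that start-and-wait pattern, assuming a polling loop -- the real waitforlisten in autotest_common.sh (visible in the trace) also caps retries:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the target is ready to accept commands.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done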
00:30:08.282 [2024-10-30 12:41:40.914396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:08.282 [2024-10-30 12:41:40.914642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.282 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 [2024-10-30 12:41:40.969450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 Malloc0 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.543 
12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 [2024-10-30 12:41:41.037647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:08.543 test case1: single bdev can't be used in multiple subsystems 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 [2024-10-30 12:41:41.061343] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:08.543 [2024-10-30 12:41:41.061375] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:08.543 [2024-10-30 12:41:41.061390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.543 request: 00:30:08.543 { 00:30:08.543 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:08.543 "namespace": { 00:30:08.543 "bdev_name": "Malloc0", 00:30:08.543 "no_auto_visible": false 00:30:08.543 }, 00:30:08.543 "method": "nvmf_subsystem_add_ns", 00:30:08.543 "req_id": 1 00:30:08.543 } 00:30:08.543 Got JSON-RPC error response 00:30:08.543 response: 00:30:08.543 { 00:30:08.543 "code": -32602, 00:30:08.543 "message": "Invalid parameters" 00:30:08.543 } 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:08.543 12:41:41 
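The failure just logged is the point of test case1: nvmf_subsystem_add_ns opens the bdev exclusive_write on behalf of the owning subsystem, so a second subsystem cannot claim it, and the RPC surfaces the bdev_open error (-1) as JSON-RPC -32602 "Invalid parameters". The complete sequence, lifted from the rpc_cmd calls in the trace:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      && echo 'unexpected: second claim succeeded' \
      || echo ' Adding namespace failed - expected result.'

Test case2 then adds a second listener on port 4421 and issues two nvme connect calls, giving the host two TCP paths to the same cnode1 subsystem.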
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:08.543 Adding namespace failed - expected result. 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:08.543 test case2: host connect to nvmf target in multiple paths 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.543 [2024-10-30 12:41:41.069446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.543 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:08.803 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:08.803 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:08.803 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:30:08.803 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:08.803 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:08.803 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:30:11.339 12:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:11.339 [global] 00:30:11.339 thread=1 00:30:11.339 invalidate=1 
00:30:11.339 rw=write 00:30:11.339 time_based=1 00:30:11.339 runtime=1 00:30:11.339 ioengine=libaio 00:30:11.339 direct=1 00:30:11.339 bs=4096 00:30:11.339 iodepth=1 00:30:11.339 norandommap=0 00:30:11.339 numjobs=1 00:30:11.339 00:30:11.339 verify_dump=1 00:30:11.339 verify_backlog=512 00:30:11.339 verify_state_save=0 00:30:11.339 do_verify=1 00:30:11.339 verify=crc32c-intel 00:30:11.339 [job0] 00:30:11.339 filename=/dev/nvme0n1 00:30:11.339 Could not set queue depth (nvme0n1) 00:30:11.339 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:11.339 fio-3.35 00:30:11.339 Starting 1 thread 00:30:12.277 00:30:12.277 job0: (groupid=0, jobs=1): err= 0: pid=766541: Wed Oct 30 12:41:44 2024 00:30:12.277 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:30:12.277 slat (nsec): min=9227, max=32700, avg=27249.52, stdev=8007.14 00:30:12.277 clat (usec): min=40905, max=41018, avg=40966.92, stdev=26.71 00:30:12.277 lat (usec): min=40937, max=41032, avg=40994.17, stdev=22.55 00:30:12.277 clat percentiles (usec): 00:30:12.277 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:12.277 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:12.277 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:12.277 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:12.277 | 99.99th=[41157] 00:30:12.277 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:30:12.277 slat (nsec): min=6611, max=35915, avg=8884.09, stdev=2466.85 00:30:12.277 clat (usec): min=145, max=249, avg=167.73, stdev=25.68 00:30:12.277 lat (usec): min=153, max=282, avg=176.61, stdev=26.08 00:30:12.277 clat percentiles (usec): 00:30:12.277 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:30:12.277 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:30:12.277 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 202], 95.00th=[ 243], 00:30:12.277 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 249], 00:30:12.277 | 99.99th=[ 249] 00:30:12.277 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:12.277 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:12.277 lat (usec) : 250=95.70% 00:30:12.277 lat (msec) : 50=4.30% 00:30:12.277 cpu : usr=0.29%, sys=0.39%, ctx=535, majf=0, minf=1 00:30:12.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:12.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:12.277 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:12.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:12.277 00:30:12.277 Run status group 0 (all jobs): 00:30:12.277 READ: bw=88.9KiB/s (91.0kB/s), 88.9KiB/s-88.9KiB/s (91.0kB/s-91.0kB/s), io=92.0KiB (94.2kB), run=1035-1035msec 00:30:12.277 WRITE: bw=1979KiB/s (2026kB/s), 1979KiB/s-1979KiB/s (2026kB/s-2026kB/s), io=2048KiB (2097kB), run=1035-1035msec 00:30:12.277 00:30:12.277 Disk stats (read/write): 00:30:12.277 nvme0n1: ios=69/512, merge=0/0, ticks=809/84, in_queue=893, util=91.78% 00:30:12.277 12:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:12.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:12.537 12:41:45 
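The fio-wrapper invocation (-p nvmf -i 4096 -d 1 -t write -r 1 -v) appears to expand to the job file whose dump is interleaved with timestamps above; reassembled here for readability (the flag mapping -i to bs, -d to iodepth, -t to rw, -r to runtime, -v to crc32c verify is inferred from the wrapper's arguments, not shown in the log):

  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1

The one-second pass writes 512 x 4 KiB IOs (2048 KiB at ~1979 KiB/s) with crc32c verification -- a smoke test of the data path, not a performance run.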
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:12.537 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:30:12.537 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:12.537 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:12.537 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.538 rmmod nvme_tcp 00:30:12.538 rmmod nvme_fabrics 00:30:12.538 rmmod nvme_keyring 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 766155 ']' 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 766155 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 766155 ']' 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 766155 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 766155 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 766155' 00:30:12.538 killing process with pid 766155 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 766155 00:30:12.538 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 766155 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.796 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.330 00:30:15.330 real 0m9.282s 00:30:15.330 user 0m17.366s 00:30:15.330 sys 0m3.385s 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:15.330 ************************************ 00:30:15.330 END TEST nvmf_nmic 00:30:15.330 ************************************ 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:15.330 ************************************ 00:30:15.330 START TEST nvmf_fio_target 00:30:15.330 ************************************ 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:15.330 * Looking for test storage... 
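nvmftestfini then unwinds the setup: sync, unload the host-side modules, kill the target by pid, strip only the SPDK-tagged iptables rules, and tear the namespace down before the next suite (nvmf_fio_target) re-runs the same bring-up. Condensed from the trace; the body of _remove_spdk_ns is not shown here, so the netns delete below is an assumption:

  sync
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  # The setup rule carried an SPDK_NVMF comment, so filtering the saved
  # ruleset removes exactly the rules this test added:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1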
00:30:15.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:15.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.330 --rc genhtml_branch_coverage=1 00:30:15.330 --rc genhtml_function_coverage=1 00:30:15.330 --rc genhtml_legend=1 00:30:15.330 --rc geninfo_all_blocks=1 00:30:15.330 --rc geninfo_unexecuted_blocks=1 00:30:15.330 00:30:15.330 ' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:15.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.330 --rc genhtml_branch_coverage=1 00:30:15.330 --rc genhtml_function_coverage=1 00:30:15.330 --rc genhtml_legend=1 00:30:15.330 --rc geninfo_all_blocks=1 00:30:15.330 --rc geninfo_unexecuted_blocks=1 00:30:15.330 00:30:15.330 ' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:15.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.330 --rc genhtml_branch_coverage=1 00:30:15.330 --rc genhtml_function_coverage=1 00:30:15.330 --rc genhtml_legend=1 00:30:15.330 --rc geninfo_all_blocks=1 00:30:15.330 --rc geninfo_unexecuted_blocks=1 00:30:15.330 00:30:15.330 ' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:15.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.330 --rc genhtml_branch_coverage=1 00:30:15.330 --rc genhtml_function_coverage=1 00:30:15.330 --rc genhtml_legend=1 00:30:15.330 --rc geninfo_all_blocks=1 00:30:15.330 --rc geninfo_unexecuted_blocks=1 00:30:15.330 
00:30:15.330 ' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.330 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.331 12:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.247 12:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.247 12:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:17.247 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:17.247 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:17.247 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.247 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:17.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.248 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.530 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.530 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.530 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.530 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:30:17.530 00:30:17.530 --- 10.0.0.2 ping statistics --- 00:30:17.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.531 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:30:17.531 00:30:17.531 --- 10.0.0.1 ping statistics --- 00:30:17.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.531 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=768740 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 768740 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 768740 ']' 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
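[Note: the nvmf_tcp_init block above isolates the target side of the two-port E810 (0000:0a:00.0 / 0000:0a:00.1) in its own network namespace, so initiator-to-target traffic crosses the physical link rather than the loopback path (this is the NET_TYPE=phy variant). Condensed, with interface names and addresses verbatim from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port toward the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks in both directions, as run above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

NVMF_APP is then prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why every nvmf_tgt invocation that follows runs inside the target namespace.]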
00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:17.531 12:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.531 [2024-10-30 12:41:50.045158] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:17.531 [2024-10-30 12:41:50.046316] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:30:17.531 [2024-10-30 12:41:50.046390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.531 [2024-10-30 12:41:50.119707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.531 [2024-10-30 12:41:50.178885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.531 [2024-10-30 12:41:50.178941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.531 [2024-10-30 12:41:50.178964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.531 [2024-10-30 12:41:50.178975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.531 [2024-10-30 12:41:50.178984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.531 [2024-10-30 12:41:50.180717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.531 [2024-10-30 12:41:50.180783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.531 [2024-10-30 12:41:50.184276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.531 [2024-10-30 12:41:50.184288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.790 [2024-10-30 12:41:50.273820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:17.790 [2024-10-30 12:41:50.274033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:17.790 [2024-10-30 12:41:50.274327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:17.790 [2024-10-30 12:41:50.274929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:17.790 [2024-10-30 12:41:50.275171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
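[Note: nvmfappstart launches the target inside the namespace with --interrupt-mode, which is the point of this nvmf_target_core_interrupt_mode variant: the notices above show four reactors starting on cores 0-3 and each spdk_thread being switched to interrupt mode, so reactors block on event fds instead of busy-polling. A sketch of how the mode could be confirmed over the RPC socket; the /var/tmp/spdk.sock path is shown in the log, while framework_get_reactors is assumed to be the current SPDK RPC name and its output fields are not guaranteed here:

    # The UNIX-domain RPC socket lives on the shared filesystem, so rpc.py
    # works from the root namespace even though nvmf_tgt runs in a netns.
    scripts/rpc.py framework_get_reactors    # assumed RPC; listing should reflect interrupt state

waitforlisten, as traced above, simply polls /var/tmp/spdk.sock until the target answers before the test proceeds to transport and bdev setup.]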
00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.790 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:18.048 [2024-10-30 12:41:50.596963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.048 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:18.308 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:18.308 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:18.568 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:18.568 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.146 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:19.147 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.147 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:19.147 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:19.715 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.973 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:19.973 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:20.231 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:20.231 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:20.491 12:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:20.491 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:20.749 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:21.010 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:21.010 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:21.271 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:21.271 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:21.529 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.094 [2024-10-30 12:41:54.497130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.094 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:22.352 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:22.609 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:22.869 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:22.869 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:30:22.869 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:22.869 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:30:22.869 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:30:22.869 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:30:24.770 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:24.770 [global] 00:30:24.770 thread=1 00:30:24.770 invalidate=1 00:30:24.770 rw=write 00:30:24.770 time_based=1 00:30:24.770 runtime=1 00:30:24.770 ioengine=libaio 00:30:24.770 direct=1 00:30:24.770 bs=4096 00:30:24.770 iodepth=1 00:30:24.770 norandommap=0 00:30:24.770 numjobs=1 00:30:24.770 00:30:24.770 verify_dump=1 00:30:24.770 verify_backlog=512 00:30:24.770 verify_state_save=0 00:30:24.770 do_verify=1 00:30:24.770 verify=crc32c-intel 00:30:24.770 [job0] 00:30:24.770 filename=/dev/nvme0n1 00:30:24.770 [job1] 00:30:24.770 filename=/dev/nvme0n2 00:30:24.770 [job2] 00:30:24.770 filename=/dev/nvme0n3 00:30:24.770 [job3] 00:30:24.770 filename=/dev/nvme0n4 00:30:24.770 Could not set queue depth (nvme0n1) 00:30:24.770 Could not set queue depth (nvme0n2) 00:30:24.770 Could not set queue depth (nvme0n3) 00:30:24.770 Could not set queue depth (nvme0n4) 00:30:25.030 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:25.030 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:25.030 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:25.030 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:25.030 fio-3.35 00:30:25.030 Starting 4 threads 00:30:26.412 00:30:26.412 job0: (groupid=0, jobs=1): err= 0: pid=769806: Wed Oct 30 12:41:58 2024 00:30:26.412 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:30:26.412 slat (nsec): min=13135, max=37729, avg=18793.43, stdev=7595.24 00:30:26.412 clat (usec): min=458, max=42954, avg=39368.78, stdev=8496.81 00:30:26.412 lat (usec): min=482, max=42973, avg=39387.57, stdev=8495.54 00:30:26.412 clat percentiles (usec): 00:30:26.412 | 1.00th=[ 457], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:26.412 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:26.412 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:26.412 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:30:26.412 | 99.99th=[42730] 00:30:26.412 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:30:26.412 slat (nsec): min=7799, max=44950, avg=20103.95, stdev=7768.57 00:30:26.412 clat (usec): min=163, max=395, avg=225.23, stdev=25.88 00:30:26.412 lat (usec): min=172, max=426, avg=245.34, stdev=23.64 00:30:26.412 clat percentiles (usec): 00:30:26.412 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:30:26.412 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:30:26.412 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:30:26.412 | 
99.00th=[ 314], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 396], 00:30:26.412 | 99.99th=[ 396] 00:30:26.412 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:30:26.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:26.412 lat (usec) : 250=81.12%, 500=14.77% 00:30:26.412 lat (msec) : 50=4.11% 00:30:26.412 cpu : usr=0.77%, sys=1.06%, ctx=536, majf=0, minf=1 00:30:26.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.412 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:26.412 job1: (groupid=0, jobs=1): err= 0: pid=769807: Wed Oct 30 12:41:58 2024 00:30:26.412 read: IOPS=1921, BW=7684KiB/s (7869kB/s)(7692KiB/1001msec) 00:30:26.412 slat (nsec): min=4184, max=53698, avg=11851.44, stdev=7770.70 00:30:26.412 clat (usec): min=214, max=551, avg=296.10, stdev=83.62 00:30:26.412 lat (usec): min=221, max=564, avg=307.95, stdev=85.57 00:30:26.412 clat percentiles (usec): 00:30:26.412 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:30:26.412 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 273], 00:30:26.412 | 70.00th=[ 285], 80.00th=[ 396], 90.00th=[ 449], 95.00th=[ 461], 00:30:26.412 | 99.00th=[ 515], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 553], 00:30:26.412 | 99.99th=[ 553] 00:30:26.412 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:26.412 slat (nsec): min=5510, max=52633, avg=8909.44, stdev=4396.38 00:30:26.412 clat (usec): min=140, max=1982, avg=184.35, stdev=58.49 00:30:26.412 lat (usec): min=147, max=1998, avg=193.26, stdev=60.18 00:30:26.412 clat percentiles (usec): 00:30:26.412 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:30:26.412 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:30:26.412 | 70.00th=[ 204], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 260], 00:30:26.412 | 99.00th=[ 310], 99.50th=[ 363], 99.90th=[ 412], 99.95th=[ 494], 00:30:26.412 | 99.99th=[ 1991] 00:30:26.412 bw ( KiB/s): min= 8192, max= 8192, per=59.09%, avg=8192.00, stdev= 0.00, samples=1 00:30:26.412 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:26.412 lat (usec) : 250=69.96%, 500=29.11%, 750=0.91% 00:30:26.412 lat (msec) : 2=0.03% 00:30:26.412 cpu : usr=3.20%, sys=3.30%, ctx=3971, majf=0, minf=1 00:30:26.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.412 issued rwts: total=1923,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:26.412 job2: (groupid=0, jobs=1): err= 0: pid=769808: Wed Oct 30 12:41:58 2024 00:30:26.412 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:30:26.413 slat (nsec): min=13427, max=34518, avg=17396.05, stdev=6940.99 00:30:26.413 clat (usec): min=40889, max=41042, avg=40970.96, stdev=35.93 00:30:26.413 lat (usec): min=40905, max=41058, avg=40988.35, stdev=35.39 00:30:26.413 clat percentiles (usec): 00:30:26.413 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:26.413 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:30:26.413 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:26.413 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:26.413 | 99.99th=[41157] 00:30:26.413 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:30:26.413 slat (nsec): min=6614, max=40045, avg=15483.13, stdev=6763.59 00:30:26.413 clat (usec): min=168, max=475, avg=233.69, stdev=23.75 00:30:26.413 lat (usec): min=186, max=512, avg=249.17, stdev=22.11 00:30:26.413 clat percentiles (usec): 00:30:26.413 | 1.00th=[ 188], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:30:26.413 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:30:26.413 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:30:26.413 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 478], 99.95th=[ 478], 00:30:26.413 | 99.99th=[ 478] 00:30:26.413 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:30:26.413 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:26.413 lat (usec) : 250=77.90%, 500=17.98% 00:30:26.413 lat (msec) : 50=4.12% 00:30:26.413 cpu : usr=0.49%, sys=0.68%, ctx=534, majf=0, minf=1 00:30:26.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.413 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:26.413 job3: (groupid=0, jobs=1): err= 0: pid=769809: Wed Oct 30 12:41:58 2024 00:30:26.413 read: IOPS=23, BW=93.1KiB/s (95.3kB/s)(96.0KiB/1031msec) 00:30:26.413 slat (nsec): min=8854, max=35146, avg=18414.33, stdev=7670.68 00:30:26.413 clat (usec): min=334, max=41023, avg=37564.24, stdev=11444.17 00:30:26.413 lat (usec): min=353, max=41037, avg=37582.65, stdev=11444.20 00:30:26.413 clat percentiles (usec): 00:30:26.413 | 1.00th=[ 334], 5.00th=[ 486], 10.00th=[40633], 20.00th=[41157], 00:30:26.413 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:26.413 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:26.413 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:26.413 | 99.99th=[41157] 00:30:26.413 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:30:26.413 slat (usec): min=6, max=1094, avg=19.37, stdev=48.40 00:30:26.413 clat (usec): min=177, max=348, avg=229.13, stdev=21.86 00:30:26.413 lat (usec): min=207, max=1408, avg=248.50, stdev=54.92 00:30:26.413 clat percentiles (usec): 00:30:26.413 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 215], 00:30:26.413 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:30:26.413 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:30:26.413 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 351], 99.95th=[ 351], 00:30:26.413 | 99.99th=[ 351] 00:30:26.413 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:30:26.413 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:26.413 lat (usec) : 250=85.82%, 500=10.07% 00:30:26.413 lat (msec) : 50=4.10% 00:30:26.413 cpu : usr=0.19%, sys=0.97%, ctx=539, majf=0, minf=1 00:30:26.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:26.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.413 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:26.413 00:30:26.413 Run status group 0 (all jobs): 00:30:26.413 READ: bw=7706KiB/s (7891kB/s), 85.4KiB/s-7684KiB/s (87.4kB/s-7869kB/s), io=7968KiB (8159kB), run=1001-1034msec 00:30:26.413 WRITE: bw=13.5MiB/s (14.2MB/s), 1981KiB/s-8184KiB/s (2028kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1034msec 00:30:26.413 00:30:26.413 Disk stats (read/write): 00:30:26.413 nvme0n1: ios=45/512, merge=0/0, ticks=1685/108, in_queue=1793, util=98.00% 00:30:26.413 nvme0n2: ios=1631/2048, merge=0/0, ticks=431/365, in_queue=796, util=87.28% 00:30:26.413 nvme0n3: ios=17/512, merge=0/0, ticks=697/117, in_queue=814, util=89.03% 00:30:26.413 nvme0n4: ios=80/512, merge=0/0, ticks=1053/119, in_queue=1172, util=98.21% 00:30:26.413 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:26.413 [global] 00:30:26.413 thread=1 00:30:26.413 invalidate=1 00:30:26.413 rw=randwrite 00:30:26.413 time_based=1 00:30:26.413 runtime=1 00:30:26.413 ioengine=libaio 00:30:26.413 direct=1 00:30:26.413 bs=4096 00:30:26.413 iodepth=1 00:30:26.413 norandommap=0 00:30:26.413 numjobs=1 00:30:26.413 00:30:26.413 verify_dump=1 00:30:26.413 verify_backlog=512 00:30:26.413 verify_state_save=0 00:30:26.413 do_verify=1 00:30:26.413 verify=crc32c-intel 00:30:26.413 [job0] 00:30:26.413 filename=/dev/nvme0n1 00:30:26.413 [job1] 00:30:26.413 filename=/dev/nvme0n2 00:30:26.413 [job2] 00:30:26.413 filename=/dev/nvme0n3 00:30:26.413 [job3] 00:30:26.413 filename=/dev/nvme0n4 00:30:26.413 Could not set queue depth (nvme0n1) 00:30:26.413 Could not set queue depth (nvme0n2) 00:30:26.413 Could not set queue depth (nvme0n3) 00:30:26.413 Could not set queue depth (nvme0n4) 00:30:26.672 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.672 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.672 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.672 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.672 fio-3.35 00:30:26.672 Starting 4 threads 00:30:28.047 00:30:28.047 job0: (groupid=0, jobs=1): err= 0: pid=770032: Wed Oct 30 12:42:00 2024 00:30:28.047 read: IOPS=25, BW=102KiB/s (104kB/s)(104KiB/1023msec) 00:30:28.047 slat (nsec): min=6194, max=34026, avg=18045.08, stdev=7541.44 00:30:28.047 clat (usec): min=424, max=45001, avg=34888.41, stdev=14976.02 00:30:28.047 lat (usec): min=435, max=45022, avg=34906.45, stdev=14980.22 00:30:28.047 clat percentiles (usec): 00:30:28.047 | 1.00th=[ 424], 5.00th=[ 486], 10.00th=[ 529], 20.00th=[40633], 00:30:28.047 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:28.047 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:28.047 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:30:28.047 | 99.99th=[44827] 00:30:28.047 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:30:28.047 slat (nsec): min=6829, max=45284, avg=15274.00, stdev=6355.57 00:30:28.047 clat (usec): min=160, 
max=2127, avg=206.46, stdev=91.12 00:30:28.047 lat (usec): min=171, max=2142, avg=221.73, stdev=91.14 00:30:28.047 clat percentiles (usec): 00:30:28.047 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:30:28.047 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:30:28.047 | 70.00th=[ 206], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 265], 00:30:28.047 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 2114], 99.95th=[ 2114], 00:30:28.047 | 99.99th=[ 2114] 00:30:28.047 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:30:28.047 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:28.047 lat (usec) : 250=88.10%, 500=7.25%, 750=0.37% 00:30:28.047 lat (msec) : 4=0.19%, 50=4.09% 00:30:28.047 cpu : usr=0.49%, sys=0.68%, ctx=540, majf=0, minf=1 00:30:28.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.047 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.047 job1: (groupid=0, jobs=1): err= 0: pid=770033: Wed Oct 30 12:42:00 2024 00:30:28.047 read: IOPS=181, BW=727KiB/s (745kB/s)(728KiB/1001msec) 00:30:28.047 slat (nsec): min=4346, max=34692, avg=7520.17, stdev=5590.23 00:30:28.047 clat (usec): min=210, max=41973, avg=4713.58, stdev=12784.44 00:30:28.047 lat (usec): min=214, max=41988, avg=4721.10, stdev=12789.17 00:30:28.047 clat percentiles (usec): 00:30:28.047 | 1.00th=[ 217], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 225], 00:30:28.047 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:30:28.047 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[41157], 95.00th=[41157], 00:30:28.047 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:28.047 | 99.99th=[42206] 00:30:28.047 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:30:28.047 slat (nsec): min=7334, max=43358, avg=18369.45, stdev=6953.25 00:30:28.047 clat (usec): min=143, max=1060, avg=251.30, stdev=76.95 00:30:28.047 lat (usec): min=150, max=1071, avg=269.67, stdev=76.87 00:30:28.047 clat percentiles (usec): 00:30:28.047 | 1.00th=[ 151], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 217], 00:30:28.047 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:30:28.047 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:30:28.047 | 99.00th=[ 545], 99.50th=[ 930], 99.90th=[ 1057], 99.95th=[ 1057], 00:30:28.047 | 99.99th=[ 1057] 00:30:28.047 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:30:28.047 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:28.047 lat (usec) : 250=64.70%, 500=31.41%, 750=0.43%, 1000=0.43% 00:30:28.047 lat (msec) : 2=0.14%, 50=2.88% 00:30:28.047 cpu : usr=0.90%, sys=1.10%, ctx=695, majf=0, minf=1 00:30:28.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.047 issued rwts: total=182,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.047 job2: (groupid=0, jobs=1): err= 0: pid=770035: Wed Oct 30 12:42:00 2024 00:30:28.047 read: IOPS=1048, 
BW=4195KiB/s (4296kB/s)(4216KiB/1005msec) 00:30:28.047 slat (nsec): min=5786, max=69692, avg=15651.91, stdev=8638.45 00:30:28.047 clat (usec): min=220, max=41326, avg=554.76, stdev=3017.44 00:30:28.047 lat (usec): min=230, max=41378, avg=570.42, stdev=3018.07 00:30:28.047 clat percentiles (usec): 00:30:28.047 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:30:28.047 | 30.00th=[ 258], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:30:28.047 | 70.00th=[ 330], 80.00th=[ 424], 90.00th=[ 494], 95.00th=[ 519], 00:30:28.047 | 99.00th=[ 611], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:30:28.047 | 99.99th=[41157] 00:30:28.047 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:30:28.047 slat (nsec): min=6759, max=72981, avg=19471.53, stdev=9103.54 00:30:28.047 clat (usec): min=151, max=1856, avg=234.91, stdev=89.31 00:30:28.047 lat (usec): min=162, max=1863, avg=254.38, stdev=91.79 00:30:28.047 clat percentiles (usec): 00:30:28.047 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:30:28.047 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 217], 60.00th=[ 235], 00:30:28.047 | 70.00th=[ 253], 80.00th=[ 281], 90.00th=[ 334], 95.00th=[ 379], 00:30:28.047 | 99.00th=[ 474], 99.50th=[ 644], 99.90th=[ 1012], 99.95th=[ 1860], 00:30:28.047 | 99.99th=[ 1860] 00:30:28.047 bw ( KiB/s): min= 4416, max= 7872, per=51.15%, avg=6144.00, stdev=2443.76, samples=2 00:30:28.047 iops : min= 1104, max= 1968, avg=1536.00, stdev=610.94, samples=2 00:30:28.048 lat (usec) : 250=51.70%, 500=44.25%, 750=3.55%, 1000=0.12% 00:30:28.048 lat (msec) : 2=0.15%, 50=0.23% 00:30:28.048 cpu : usr=2.79%, sys=4.68%, ctx=2591, majf=0, minf=1 00:30:28.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.048 issued rwts: total=1054,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.048 job3: (groupid=0, jobs=1): err= 0: pid=770037: Wed Oct 30 12:42:00 2024 00:30:28.048 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:30:28.048 slat (nsec): min=7113, max=36473, avg=21433.43, stdev=7899.94 00:30:28.048 clat (usec): min=40906, max=45956, avg=41560.56, stdev=1108.18 00:30:28.048 lat (usec): min=40941, max=45977, avg=41582.00, stdev=1108.30 00:30:28.048 clat percentiles (usec): 00:30:28.048 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:28.048 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:30:28.048 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:28.048 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:30:28.048 | 99.99th=[45876] 00:30:28.048 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:30:28.048 slat (nsec): min=7685, max=70752, avg=19289.17, stdev=7517.13 00:30:28.048 clat (usec): min=169, max=285, avg=226.21, stdev=19.03 00:30:28.048 lat (usec): min=177, max=308, avg=245.50, stdev=21.74 00:30:28.048 clat percentiles (usec): 00:30:28.048 | 1.00th=[ 184], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:30:28.048 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:30:28.048 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:30:28.048 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:30:28.048 | 99.99th=[ 285] 
00:30:28.048 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:30:28.048 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:28.048 lat (usec) : 250=84.43%, 500=11.63% 00:30:28.048 lat (msec) : 50=3.94% 00:30:28.048 cpu : usr=0.70%, sys=1.40%, ctx=534, majf=0, minf=2 00:30:28.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.048 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.048 00:30:28.048 Run status group 0 (all jobs): 00:30:28.048 READ: bw=5017KiB/s (5137kB/s), 83.8KiB/s-4195KiB/s (85.8kB/s-4296kB/s), io=5132KiB (5255kB), run=1001-1023msec 00:30:28.048 WRITE: bw=11.7MiB/s (12.3MB/s), 2002KiB/s-6113KiB/s (2050kB/s-6260kB/s), io=12.0MiB (12.6MB), run=1001-1023msec 00:30:28.048 00:30:28.048 Disk stats (read/write): 00:30:28.048 nvme0n1: ios=70/512, merge=0/0, ticks=856/102, in_queue=958, util=86.07% 00:30:28.048 nvme0n2: ios=41/512, merge=0/0, ticks=1605/119, in_queue=1724, util=89.96% 00:30:28.048 nvme0n3: ios=1099/1536, merge=0/0, ticks=713/342, in_queue=1055, util=94.80% 00:30:28.048 nvme0n4: ios=77/512, merge=0/0, ticks=1064/110, in_queue=1174, util=94.35% 00:30:28.048 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:28.048 [global] 00:30:28.048 thread=1 00:30:28.048 invalidate=1 00:30:28.048 rw=write 00:30:28.048 time_based=1 00:30:28.048 runtime=1 00:30:28.048 ioengine=libaio 00:30:28.048 direct=1 00:30:28.048 bs=4096 00:30:28.048 iodepth=128 00:30:28.048 norandommap=0 00:30:28.048 numjobs=1 00:30:28.048 00:30:28.048 verify_dump=1 00:30:28.048 verify_backlog=512 00:30:28.048 verify_state_save=0 00:30:28.048 do_verify=1 00:30:28.048 verify=crc32c-intel 00:30:28.048 [job0] 00:30:28.048 filename=/dev/nvme0n1 00:30:28.048 [job1] 00:30:28.048 filename=/dev/nvme0n2 00:30:28.048 [job2] 00:30:28.048 filename=/dev/nvme0n3 00:30:28.048 [job3] 00:30:28.048 filename=/dev/nvme0n4 00:30:28.048 Could not set queue depth (nvme0n1) 00:30:28.048 Could not set queue depth (nvme0n2) 00:30:28.048 Could not set queue depth (nvme0n3) 00:30:28.048 Could not set queue depth (nvme0n4) 00:30:28.048 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:28.048 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:28.048 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:28.048 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:28.048 fio-3.35 00:30:28.048 Starting 4 threads 00:30:29.422 00:30:29.422 job0: (groupid=0, jobs=1): err= 0: pid=770308: Wed Oct 30 12:42:01 2024 00:30:29.422 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:30:29.422 slat (usec): min=2, max=19334, avg=101.42, stdev=709.07 00:30:29.422 clat (usec): min=5077, max=39534, avg=13238.07, stdev=4249.10 00:30:29.422 lat (usec): min=5084, max=39539, avg=13339.49, stdev=4309.91 00:30:29.422 clat percentiles (usec): 00:30:29.422 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10290], 
00:30:29.422 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[12780], 00:30:29.422 | 70.00th=[13829], 80.00th=[16188], 90.00th=[19530], 95.00th=[22676], 00:30:29.422 | 99.00th=[26870], 99.50th=[26870], 99.90th=[39584], 99.95th=[39584], 00:30:29.422 | 99.99th=[39584] 00:30:29.422 write: IOPS=4659, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1006msec); 0 zone resets 00:30:29.422 slat (usec): min=3, max=31227, avg=101.82, stdev=784.14 00:30:29.422 clat (usec): min=293, max=54788, avg=14195.27, stdev=6106.47 00:30:29.422 lat (usec): min=312, max=54808, avg=14297.09, stdev=6155.96 00:30:29.422 clat percentiles (usec): 00:30:29.422 | 1.00th=[ 1991], 5.00th=[ 6652], 10.00th=[ 9110], 20.00th=[10683], 00:30:29.422 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[13304], 00:30:29.422 | 70.00th=[14746], 80.00th=[18220], 90.00th=[23987], 95.00th=[26608], 00:30:29.422 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33162], 99.95th=[33162], 00:30:29.422 | 99.99th=[54789] 00:30:29.422 bw ( KiB/s): min=16384, max=20480, per=28.09%, avg=18432.00, stdev=2896.31, samples=2 00:30:29.422 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:30:29.422 lat (usec) : 500=0.01%, 1000=0.09% 00:30:29.422 lat (msec) : 2=0.41%, 4=1.04%, 10=13.19%, 20=71.88%, 50=13.37% 00:30:29.422 lat (msec) : 100=0.01% 00:30:29.422 cpu : usr=2.79%, sys=8.36%, ctx=507, majf=0, minf=1 00:30:29.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:29.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:29.422 issued rwts: total=4608,4687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:29.422 job1: (groupid=0, jobs=1): err= 0: pid=770309: Wed Oct 30 12:42:01 2024 00:30:29.422 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:30:29.422 slat (nsec): min=1987, max=10652k, avg=89816.23, stdev=615446.96 00:30:29.422 clat (usec): min=6537, max=35359, avg=13222.74, stdev=3743.56 00:30:29.422 lat (usec): min=6542, max=35365, avg=13312.56, stdev=3787.41 00:30:29.422 clat percentiles (usec): 00:30:29.422 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10421], 00:30:29.422 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12780], 60.00th=[13173], 00:30:29.422 | 70.00th=[13829], 80.00th=[16188], 90.00th=[19006], 95.00th=[20055], 00:30:29.422 | 99.00th=[22938], 99.50th=[25560], 99.90th=[35390], 99.95th=[35390], 00:30:29.422 | 99.99th=[35390] 00:30:29.422 write: IOPS=3904, BW=15.2MiB/s (16.0MB/s)(15.4MiB/1012msec); 0 zone resets 00:30:29.422 slat (usec): min=3, max=16390, avg=158.19, stdev=977.71 00:30:29.422 clat (usec): min=3365, max=81621, avg=20424.36, stdev=18317.98 00:30:29.422 lat (usec): min=3380, max=81626, avg=20582.55, stdev=18425.05 00:30:29.422 clat percentiles (usec): 00:30:29.422 | 1.00th=[ 4424], 5.00th=[ 8029], 10.00th=[ 9503], 20.00th=[10683], 00:30:29.422 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11731], 60.00th=[12256], 00:30:29.422 | 70.00th=[15926], 80.00th=[26608], 90.00th=[56886], 95.00th=[68682], 00:30:29.422 | 99.00th=[74974], 99.50th=[74974], 99.90th=[81265], 99.95th=[81265], 00:30:29.422 | 99.99th=[81265] 00:30:29.422 bw ( KiB/s): min=10112, max=20480, per=23.31%, avg=15296.00, stdev=7331.28, samples=2 00:30:29.422 iops : min= 2528, max= 5120, avg=3824.00, stdev=1832.82, samples=2 00:30:29.422 lat (msec) : 4=0.03%, 10=13.58%, 20=69.74%, 50=10.70%, 100=5.96% 00:30:29.422 
cpu : usr=1.58%, sys=3.76%, ctx=300, majf=0, minf=1 00:30:29.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:29.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:29.422 issued rwts: total=3584,3951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:29.422 job2: (groupid=0, jobs=1): err= 0: pid=770313: Wed Oct 30 12:42:01 2024 00:30:29.422 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:30:29.422 slat (usec): min=2, max=15323, avg=136.88, stdev=947.73 00:30:29.422 clat (usec): min=3888, max=39213, avg=16823.16, stdev=5565.81 00:30:29.422 lat (usec): min=3907, max=48570, avg=16960.05, stdev=5628.14 00:30:29.422 clat percentiles (usec): 00:30:29.422 | 1.00th=[ 7308], 5.00th=[10028], 10.00th=[11600], 20.00th=[12256], 00:30:29.422 | 30.00th=[14091], 40.00th=[15008], 50.00th=[15533], 60.00th=[16450], 00:30:29.423 | 70.00th=[17695], 80.00th=[21365], 90.00th=[23462], 95.00th=[27132], 00:30:29.423 | 99.00th=[33817], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:30:29.423 | 99.99th=[39060] 00:30:29.423 write: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(14.4MiB/1009msec); 0 zone resets 00:30:29.423 slat (usec): min=3, max=19098, avg=130.61, stdev=748.03 00:30:29.423 clat (usec): min=1044, max=42527, avg=18432.71, stdev=8027.25 00:30:29.423 lat (usec): min=1063, max=42545, avg=18563.33, stdev=8107.72 00:30:29.423 clat percentiles (usec): 00:30:29.423 | 1.00th=[ 4490], 5.00th=[ 8291], 10.00th=[10159], 20.00th=[12256], 00:30:29.423 | 30.00th=[13042], 40.00th=[13698], 50.00th=[15926], 60.00th=[20055], 00:30:29.423 | 70.00th=[23462], 80.00th=[24511], 90.00th=[27132], 95.00th=[36439], 00:30:29.423 | 99.00th=[39060], 99.50th=[39060], 99.90th=[40109], 99.95th=[41681], 00:30:29.423 | 99.99th=[42730] 00:30:29.423 bw ( KiB/s): min=13616, max=15056, per=21.85%, avg=14336.00, stdev=1018.23, samples=2 00:30:29.423 iops : min= 3404, max= 3764, avg=3584.00, stdev=254.56, samples=2 00:30:29.423 lat (msec) : 2=0.10%, 4=0.40%, 10=6.38%, 20=60.32%, 50=32.81% 00:30:29.423 cpu : usr=3.17%, sys=5.56%, ctx=422, majf=0, minf=2 00:30:29.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:29.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:29.423 issued rwts: total=3584,3677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:29.423 job3: (groupid=0, jobs=1): err= 0: pid=770314: Wed Oct 30 12:42:01 2024 00:30:29.423 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:30:29.423 slat (usec): min=2, max=12503, avg=121.08, stdev=707.22 00:30:29.423 clat (msec): min=4, max=114, avg=15.66, stdev= 7.87 00:30:29.423 lat (msec): min=4, max=114, avg=15.78, stdev= 7.91 00:30:29.423 clat percentiles (msec): 00:30:29.423 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:30:29.423 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:30:29.423 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 20], 95.00th=[ 30], 00:30:29.423 | 99.00th=[ 41], 99.50th=[ 54], 99.90th=[ 111], 99.95th=[ 111], 00:30:29.423 | 99.99th=[ 115] 00:30:29.423 write: IOPS=4271, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1003msec); 0 zone resets 00:30:29.423 slat (usec): min=3, max=15524, avg=104.70, stdev=605.00 00:30:29.423 
clat (msec): min=2, max=110, avg=14.65, stdev= 9.09 00:30:29.423 lat (msec): min=2, max=110, avg=14.75, stdev= 9.10 00:30:29.423 clat percentiles (msec): 00:30:29.423 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:30:29.423 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:30:29.423 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 19], 00:30:29.423 | 99.00th=[ 69], 99.50th=[ 93], 99.90th=[ 105], 99.95th=[ 105], 00:30:29.423 | 99.99th=[ 111] 00:30:29.423 bw ( KiB/s): min=16528, max=16728, per=25.34%, avg=16628.00, stdev=141.42, samples=2 00:30:29.423 iops : min= 4132, max= 4182, avg=4157.00, stdev=35.36, samples=2 00:30:29.423 lat (msec) : 4=0.11%, 10=8.03%, 20=85.55%, 50=5.21%, 100=0.74% 00:30:29.423 lat (msec) : 250=0.36% 00:30:29.423 cpu : usr=2.99%, sys=5.19%, ctx=508, majf=0, minf=1 00:30:29.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:29.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:29.423 issued rwts: total=4096,4284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:29.423 00:30:29.423 Run status group 0 (all jobs): 00:30:29.423 READ: bw=61.3MiB/s (64.2MB/s), 13.8MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=62.0MiB (65.0MB), run=1003-1012msec 00:30:29.423 WRITE: bw=64.1MiB/s (67.2MB/s), 14.2MiB/s-18.2MiB/s (14.9MB/s-19.1MB/s), io=64.8MiB (68.0MB), run=1003-1012msec 00:30:29.423 00:30:29.423 Disk stats (read/write): 00:30:29.423 nvme0n1: ios=3626/3983, merge=0/0, ticks=34701/38249, in_queue=72950, util=97.39% 00:30:29.423 nvme0n2: ios=3468/3584, merge=0/0, ticks=28113/25519, in_queue=53632, util=90.56% 00:30:29.423 nvme0n3: ios=2607/3071, merge=0/0, ticks=40827/59562, in_queue=100389, util=95.92% 00:30:29.423 nvme0n4: ios=3619/3614, merge=0/0, ticks=30234/30996, in_queue=61230, util=97.37% 00:30:29.423 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:29.423 [global] 00:30:29.423 thread=1 00:30:29.423 invalidate=1 00:30:29.423 rw=randwrite 00:30:29.423 time_based=1 00:30:29.423 runtime=1 00:30:29.423 ioengine=libaio 00:30:29.423 direct=1 00:30:29.423 bs=4096 00:30:29.423 iodepth=128 00:30:29.423 norandommap=0 00:30:29.423 numjobs=1 00:30:29.423 00:30:29.423 verify_dump=1 00:30:29.423 verify_backlog=512 00:30:29.423 verify_state_save=0 00:30:29.423 do_verify=1 00:30:29.423 verify=crc32c-intel 00:30:29.423 [job0] 00:30:29.423 filename=/dev/nvme0n1 00:30:29.423 [job1] 00:30:29.423 filename=/dev/nvme0n2 00:30:29.423 [job2] 00:30:29.423 filename=/dev/nvme0n3 00:30:29.423 [job3] 00:30:29.423 filename=/dev/nvme0n4 00:30:29.423 Could not set queue depth (nvme0n1) 00:30:29.423 Could not set queue depth (nvme0n2) 00:30:29.423 Could not set queue depth (nvme0n3) 00:30:29.423 Could not set queue depth (nvme0n4) 00:30:29.423 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.423 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.423 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.423 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:30:29.423 fio-3.35 00:30:29.423 Starting 4 threads 00:30:30.798 00:30:30.798 job0: (groupid=0, jobs=1): err= 0: pid=770609: Wed Oct 30 12:42:03 2024 00:30:30.798 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:30:30.798 slat (usec): min=3, max=9648, avg=148.78, stdev=848.81 00:30:30.798 clat (usec): min=3925, max=38870, avg=18952.11, stdev=6713.78 00:30:30.798 lat (usec): min=3943, max=38904, avg=19100.89, stdev=6799.56 00:30:30.798 clat percentiles (usec): 00:30:30.798 | 1.00th=[ 7898], 5.00th=[11207], 10.00th=[11469], 20.00th=[15139], 00:30:30.798 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188], 00:30:30.798 | 70.00th=[20317], 80.00th=[28705], 90.00th=[29230], 95.00th=[30016], 00:30:30.798 | 99.00th=[33817], 99.50th=[34341], 99.90th=[38536], 99.95th=[38536], 00:30:30.798 | 99.99th=[39060] 00:30:30.798 write: IOPS=3526, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:30:30.798 slat (usec): min=4, max=10490, avg=146.47, stdev=831.44 00:30:30.798 clat (usec): min=1208, max=60518, avg=19540.87, stdev=9540.12 00:30:30.798 lat (usec): min=1220, max=60527, avg=19687.33, stdev=9625.05 00:30:30.798 clat percentiles (usec): 00:30:30.798 | 1.00th=[ 5342], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[12518], 00:30:30.798 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15401], 60.00th=[18220], 00:30:30.798 | 70.00th=[23462], 80.00th=[26608], 90.00th=[30278], 95.00th=[38011], 00:30:30.798 | 99.00th=[55837], 99.50th=[57410], 99.90th=[60556], 99.95th=[60556], 00:30:30.798 | 99.99th=[60556] 00:30:30.798 bw ( KiB/s): min=12568, max=14736, per=19.95%, avg=13652.00, stdev=1533.01, samples=2 00:30:30.798 iops : min= 3142, max= 3684, avg=3413.00, stdev=383.25, samples=2 00:30:30.798 lat (msec) : 2=0.11%, 4=0.33%, 10=3.40%, 20=62.07%, 50=32.90% 00:30:30.798 lat (msec) : 100=1.18% 00:30:30.798 cpu : usr=2.69%, sys=4.69%, ctx=281, majf=0, minf=2 00:30:30.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:30:30.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.798 issued rwts: total=3072,3541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.798 job1: (groupid=0, jobs=1): err= 0: pid=770610: Wed Oct 30 12:42:03 2024 00:30:30.798 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:30:30.798 slat (usec): min=2, max=9075, avg=81.08, stdev=605.20 00:30:30.798 clat (usec): min=3193, max=22013, avg=11081.36, stdev=2884.10 00:30:30.798 lat (usec): min=3205, max=22017, avg=11162.43, stdev=2911.03 00:30:30.798 clat percentiles (usec): 00:30:30.798 | 1.00th=[ 6521], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8455], 00:30:30.798 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10814], 00:30:30.798 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15401], 95.00th=[16319], 00:30:30.798 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20055], 99.95th=[21103], 00:30:30.798 | 99.99th=[21890] 00:30:30.798 write: IOPS=6017, BW=23.5MiB/s (24.6MB/s)(23.7MiB/1007msec); 0 zone resets 00:30:30.798 slat (usec): min=4, max=9831, avg=83.75, stdev=608.05 00:30:30.798 clat (usec): min=2346, max=21571, avg=10773.74, stdev=2667.56 00:30:30.798 lat (usec): min=2353, max=21589, avg=10857.48, stdev=2689.41 00:30:30.798 clat percentiles (usec): 00:30:30.798 | 1.00th=[ 4817], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 8455], 00:30:30.798 | 30.00th=[ 9372], 
40.00th=[10159], 50.00th=[10814], 60.00th=[11338], 00:30:30.798 | 70.00th=[11863], 80.00th=[13173], 90.00th=[14222], 95.00th=[14746], 00:30:30.798 | 99.00th=[17695], 99.50th=[19268], 99.90th=[19530], 99.95th=[20055], 00:30:30.798 | 99.99th=[21627] 00:30:30.798 bw ( KiB/s): min=22960, max=24496, per=34.67%, avg=23728.00, stdev=1086.12, samples=2 00:30:30.798 iops : min= 5740, max= 6124, avg=5932.00, stdev=271.53, samples=2 00:30:30.799 lat (msec) : 4=0.22%, 10=38.69%, 20=60.99%, 50=0.09% 00:30:30.799 cpu : usr=4.57%, sys=7.85%, ctx=387, majf=0, minf=1 00:30:30.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:30:30.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.799 issued rwts: total=5632,6060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.799 job2: (groupid=0, jobs=1): err= 0: pid=770626: Wed Oct 30 12:42:03 2024 00:30:30.799 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:30:30.799 slat (usec): min=3, max=11370, avg=106.46, stdev=563.41 00:30:30.799 clat (usec): min=9240, max=31437, avg=13947.91, stdev=3408.98 00:30:30.799 lat (usec): min=9262, max=31442, avg=14054.37, stdev=3437.97 00:30:30.799 clat percentiles (usec): 00:30:30.799 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11469], 20.00th=[12256], 00:30:30.799 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13173], 60.00th=[13304], 00:30:30.799 | 70.00th=[13566], 80.00th=[14615], 90.00th=[15664], 95.00th=[22414], 00:30:30.799 | 99.00th=[30016], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:30:30.799 | 99.99th=[31327] 00:30:30.799 write: IOPS=4810, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1004msec); 0 zone resets 00:30:30.799 slat (usec): min=4, max=13118, avg=98.23, stdev=580.54 00:30:30.799 clat (usec): min=514, max=28208, avg=13062.29, stdev=1848.50 00:30:30.799 lat (usec): min=4265, max=28255, avg=13160.53, stdev=1906.33 00:30:30.799 clat percentiles (usec): 00:30:30.799 | 1.00th=[ 8586], 5.00th=[11207], 10.00th=[12125], 20.00th=[12387], 00:30:30.799 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:30:30.799 | 70.00th=[13173], 80.00th=[13435], 90.00th=[15008], 95.00th=[16057], 00:30:30.799 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22152], 99.95th=[22676], 00:30:30.799 | 99.99th=[28181] 00:30:30.799 bw ( KiB/s): min=18760, max=18856, per=27.48%, avg=18808.00, stdev=67.88, samples=2 00:30:30.799 iops : min= 4690, max= 4714, avg=4702.00, stdev=16.97, samples=2 00:30:30.799 lat (usec) : 750=0.01% 00:30:30.799 lat (msec) : 10=1.32%, 20=94.89%, 50=3.77% 00:30:30.799 cpu : usr=3.19%, sys=6.88%, ctx=429, majf=0, minf=2 00:30:30.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:30.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.799 issued rwts: total=4608,4830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.799 job3: (groupid=0, jobs=1): err= 0: pid=770631: Wed Oct 30 12:42:03 2024 00:30:30.799 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:30:30.799 slat (usec): min=3, max=8810, avg=160.33, stdev=841.45 00:30:30.799 clat (usec): min=10655, max=37524, avg=19247.83, stdev=3684.59 00:30:30.799 lat (usec): min=10663, max=37546, avg=19408.16, 
stdev=3783.62 00:30:30.799 clat percentiles (usec): 00:30:30.799 | 1.00th=[12256], 5.00th=[14877], 10.00th=[15139], 20.00th=[15533], 00:30:30.799 | 30.00th=[17957], 40.00th=[19006], 50.00th=[19006], 60.00th=[19268], 00:30:30.799 | 70.00th=[19792], 80.00th=[21365], 90.00th=[24249], 95.00th=[25297], 00:30:30.799 | 99.00th=[33817], 99.50th=[33817], 99.90th=[37487], 99.95th=[37487], 00:30:30.799 | 99.99th=[37487] 00:30:30.799 write: IOPS=2792, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1008msec); 0 zone resets 00:30:30.799 slat (usec): min=4, max=10087, avg=202.54, stdev=834.17 00:30:30.799 clat (usec): min=5137, max=48329, avg=27716.41, stdev=11030.60 00:30:30.799 lat (usec): min=7965, max=48335, avg=27918.95, stdev=11104.92 00:30:30.799 clat percentiles (usec): 00:30:30.799 | 1.00th=[10290], 5.00th=[13698], 10.00th=[14353], 20.00th=[15533], 00:30:30.799 | 30.00th=[19268], 40.00th=[20579], 50.00th=[24773], 60.00th=[32113], 00:30:30.799 | 70.00th=[38011], 80.00th=[40633], 90.00th=[41681], 95.00th=[42730], 00:30:30.799 | 99.00th=[45351], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:30:30.799 | 99.99th=[48497] 00:30:30.799 bw ( KiB/s): min= 9208, max=12288, per=15.71%, avg=10748.00, stdev=2177.89, samples=2 00:30:30.799 iops : min= 2302, max= 3072, avg=2687.00, stdev=544.47, samples=2 00:30:30.799 lat (msec) : 10=0.52%, 20=54.66%, 50=44.82% 00:30:30.799 cpu : usr=2.38%, sys=4.47%, ctx=322, majf=0, minf=1 00:30:30.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:30.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.799 issued rwts: total=2560,2815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.799 00:30:30.799 Run status group 0 (all jobs): 00:30:30.799 READ: bw=61.5MiB/s (64.5MB/s), 9.92MiB/s-21.8MiB/s (10.4MB/s-22.9MB/s), io=62.0MiB (65.0MB), run=1004-1008msec 00:30:30.799 WRITE: bw=66.8MiB/s (70.1MB/s), 10.9MiB/s-23.5MiB/s (11.4MB/s-24.6MB/s), io=67.4MiB (70.6MB), run=1004-1008msec 00:30:30.799 00:30:30.799 Disk stats (read/write): 00:30:30.799 nvme0n1: ios=2634/3072, merge=0/0, ticks=18652/26305, in_queue=44957, util=87.17% 00:30:30.799 nvme0n2: ios=4654/5120, merge=0/0, ticks=49746/53536, in_queue=103282, util=97.97% 00:30:30.799 nvme0n3: ios=3827/4096, merge=0/0, ticks=19685/22037, in_queue=41722, util=88.82% 00:30:30.799 nvme0n4: ios=2087/2527, merge=0/0, ticks=19560/32799, in_queue=52359, util=97.26% 00:30:30.799 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:30.799 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=770800 00:30:30.799 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:30.799 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:30.799 [global] 00:30:30.799 thread=1 00:30:30.799 invalidate=1 00:30:30.799 rw=read 00:30:30.799 time_based=1 00:30:30.799 runtime=10 00:30:30.799 ioengine=libaio 00:30:30.799 direct=1 00:30:30.799 bs=4096 00:30:30.799 iodepth=1 00:30:30.799 norandommap=1 00:30:30.799 numjobs=1 00:30:30.799 00:30:30.799 [job0] 00:30:30.799 filename=/dev/nvme0n1 00:30:30.799 [job1] 00:30:30.799 filename=/dev/nvme0n2 00:30:30.799 [job2] 
00:30:30.799 filename=/dev/nvme0n3 00:30:30.799 [job3] 00:30:30.799 filename=/dev/nvme0n4 00:30:30.799 Could not set queue depth (nvme0n1) 00:30:30.799 Could not set queue depth (nvme0n2) 00:30:30.799 Could not set queue depth (nvme0n3) 00:30:30.799 Could not set queue depth (nvme0n4) 00:30:30.799 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.799 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.799 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.799 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:30.799 fio-3.35 00:30:30.799 Starting 4 threads 00:30:34.086 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:34.086 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:34.086 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29642752, buflen=4096 00:30:34.086 fio: pid=770965, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:34.344 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:34.344 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:34.344 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=29634560, buflen=4096 00:30:34.344 fio: pid=770959, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:34.603 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22495232, buflen=4096 00:30:34.603 fio: pid=770956, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:34.603 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:34.603 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:34.861 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:34.861 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:34.861 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=25153536, buflen=4096 00:30:34.861 fio: pid=770957, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:34.861 00:30:34.861 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770956: Wed Oct 30 12:42:07 2024 00:30:34.861 read: IOPS=1564, BW=6257KiB/s (6407kB/s)(21.5MiB/3511msec) 00:30:34.861 slat (usec): min=4, max=12940, avg=18.39, stdev=291.16 00:30:34.861 clat (usec): min=176, max=41073, avg=612.59, stdev=3322.96 00:30:34.861 lat 
(usec): min=182, max=54004, avg=630.98, stdev=3364.72 00:30:34.861 clat percentiles (usec): 00:30:34.861 | 1.00th=[ 219], 5.00th=[ 239], 10.00th=[ 253], 20.00th=[ 273], 00:30:34.861 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 343], 00:30:34.861 | 70.00th=[ 396], 80.00th=[ 420], 90.00th=[ 469], 95.00th=[ 519], 00:30:34.861 | 99.00th=[ 627], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:34.861 | 99.99th=[41157] 00:30:34.861 bw ( KiB/s): min= 96, max=13301, per=23.12%, avg=6294.17, stdev=5305.00, samples=6 00:30:34.861 iops : min= 24, max= 3325, avg=1573.50, stdev=1326.18, samples=6 00:30:34.861 lat (usec) : 250=8.79%, 500=84.84%, 750=5.68% 00:30:34.861 lat (msec) : 50=0.67% 00:30:34.861 cpu : usr=0.66%, sys=2.22%, ctx=5497, majf=0, minf=1 00:30:34.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.861 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.861 issued rwts: total=5493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:34.861 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770957: Wed Oct 30 12:42:07 2024 00:30:34.861 read: IOPS=1601, BW=6404KiB/s (6557kB/s)(24.0MiB/3836msec) 00:30:34.861 slat (usec): min=4, max=31757, avg=25.11, stdev=638.67 00:30:34.861 clat (usec): min=186, max=41098, avg=593.79, stdev=3561.32 00:30:34.861 lat (usec): min=191, max=68942, avg=618.89, stdev=3669.11 00:30:34.861 clat percentiles (usec): 00:30:34.861 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:30:34.861 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:30:34.861 | 70.00th=[ 277], 80.00th=[ 314], 90.00th=[ 396], 95.00th=[ 416], 00:30:34.861 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:34.861 | 99.99th=[41157] 00:30:34.861 bw ( KiB/s): min= 96, max=13952, per=24.97%, avg=6798.71, stdev=5972.85, samples=7 00:30:34.861 iops : min= 24, max= 3488, avg=1699.57, stdev=1493.35, samples=7 00:30:34.861 lat (usec) : 250=45.65%, 500=52.39%, 750=1.06% 00:30:34.861 lat (msec) : 2=0.03%, 4=0.02%, 10=0.03%, 20=0.03%, 50=0.77% 00:30:34.861 cpu : usr=0.39%, sys=1.59%, ctx=6147, majf=0, minf=2 00:30:34.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.861 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.861 issued rwts: total=6142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:34.861 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770959: Wed Oct 30 12:42:07 2024 00:30:34.861 read: IOPS=2226, BW=8905KiB/s (9118kB/s)(28.3MiB/3250msec) 00:30:34.861 slat (nsec): min=4740, max=51711, avg=11834.18, stdev=4496.66 00:30:34.861 clat (usec): min=185, max=45038, avg=431.60, stdev=2371.31 00:30:34.861 lat (usec): min=194, max=45057, avg=443.44, stdev=2372.21 00:30:34.861 clat percentiles (usec): 00:30:34.861 | 1.00th=[ 202], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 229], 00:30:34.861 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 273], 00:30:34.861 | 70.00th=[ 318], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 461], 00:30:34.861 | 99.00th=[ 578], 99.50th=[ 660], 99.90th=[41157], 
99.95th=[41157], 00:30:34.861 | 99.99th=[44827] 00:30:34.861 bw ( KiB/s): min= 103, max=15928, per=35.41%, avg=9638.50, stdev=5678.16, samples=6 00:30:34.861 iops : min= 25, max= 3982, avg=2409.50, stdev=1419.79, samples=6 00:30:34.861 lat (usec) : 250=51.31%, 500=46.25%, 750=2.02% 00:30:34.861 lat (msec) : 2=0.01%, 4=0.03%, 10=0.01%, 50=0.35% 00:30:34.861 cpu : usr=1.05%, sys=3.05%, ctx=7237, majf=0, minf=2 00:30:34.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.861 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.861 issued rwts: total=7236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:34.861 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770965: Wed Oct 30 12:42:07 2024 00:30:34.861 read: IOPS=2488, BW=9951KiB/s (10.2MB/s)(28.3MiB/2909msec) 00:30:34.861 slat (nsec): min=4428, max=45926, avg=9720.17, stdev=4345.79 00:30:34.861 clat (usec): min=209, max=42095, avg=386.57, stdev=2148.19 00:30:34.861 lat (usec): min=214, max=42123, avg=396.29, stdev=2149.14 00:30:34.861 clat percentiles (usec): 00:30:34.861 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:30:34.861 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:30:34.861 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 379], 95.00th=[ 408], 00:30:34.862 | 99.00th=[ 502], 99.50th=[ 562], 99.90th=[41157], 99.95th=[41681], 00:30:34.862 | 99.99th=[42206] 00:30:34.862 bw ( KiB/s): min= 103, max=15136, per=33.85%, avg=9214.20, stdev=5672.43, samples=5 00:30:34.862 iops : min= 25, max= 3784, avg=2303.40, stdev=1418.41, samples=5 00:30:34.862 lat (usec) : 250=51.01%, 500=47.94%, 750=0.72% 00:30:34.862 lat (msec) : 2=0.03%, 10=0.01%, 50=0.28% 00:30:34.862 cpu : usr=1.13%, sys=2.68%, ctx=7239, majf=0, minf=2 00:30:34.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.862 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.862 issued rwts: total=7238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:34.862 00:30:34.862 Run status group 0 (all jobs): 00:30:34.862 READ: bw=26.6MiB/s (27.9MB/s), 6257KiB/s-9951KiB/s (6407kB/s-10.2MB/s), io=102MiB (107MB), run=2909-3836msec 00:30:34.862 00:30:34.862 Disk stats (read/write): 00:30:34.862 nvme0n1: ios=4964/0, merge=0/0, ticks=3161/0, in_queue=3161, util=94.48% 00:30:34.862 nvme0n2: ios=6141/0, merge=0/0, ticks=3616/0, in_queue=3616, util=93.45% 00:30:34.862 nvme0n3: ios=7277/0, merge=0/0, ticks=3736/0, in_queue=3736, util=99.46% 00:30:34.862 nvme0n4: ios=6958/0, merge=0/0, ticks=2668/0, in_queue=2668, util=96.70% 00:30:35.119 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.119 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:35.685 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.685 12:42:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:35.685 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.685 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:36.251 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.251 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:36.509 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:36.509 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 770800 00:30:36.509 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:36.509 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:36.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:36.509 nvmf hotplug test: fio failed as expected 00:30:36.509 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.767 rmmod nvme_tcp 00:30:36.767 rmmod nvme_fabrics 00:30:36.767 rmmod nvme_keyring 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 768740 ']' 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 768740 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 768740 ']' 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 768740 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:36.767 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 768740 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 768740' 00:30:37.025 killing process with pid 768740 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 768740 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 768740 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.025 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.558 00:30:39.558 real 0m24.201s 00:30:39.558 user 1m7.986s 00:30:39.558 sys 0m10.799s 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.558 ************************************ 00:30:39.558 END TEST nvmf_fio_target 00:30:39.558 ************************************ 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.558 ************************************ 00:30:39.558 START TEST nvmf_bdevio 00:30:39.558 ************************************ 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:39.558 * Looking for test storage... 
00:30:39.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:39.558 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.559 --rc genhtml_branch_coverage=1 00:30:39.559 --rc genhtml_function_coverage=1 00:30:39.559 --rc genhtml_legend=1 00:30:39.559 --rc geninfo_all_blocks=1 00:30:39.559 --rc geninfo_unexecuted_blocks=1 00:30:39.559 00:30:39.559 ' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.559 --rc genhtml_branch_coverage=1 00:30:39.559 --rc genhtml_function_coverage=1 00:30:39.559 --rc genhtml_legend=1 00:30:39.559 --rc geninfo_all_blocks=1 00:30:39.559 --rc geninfo_unexecuted_blocks=1 00:30:39.559 00:30:39.559 ' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.559 --rc genhtml_branch_coverage=1 00:30:39.559 --rc genhtml_function_coverage=1 00:30:39.559 --rc genhtml_legend=1 00:30:39.559 --rc geninfo_all_blocks=1 00:30:39.559 --rc geninfo_unexecuted_blocks=1 00:30:39.559 00:30:39.559 ' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:39.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.559 --rc genhtml_branch_coverage=1 00:30:39.559 --rc genhtml_function_coverage=1 00:30:39.559 --rc genhtml_legend=1 00:30:39.559 --rc geninfo_all_blocks=1 00:30:39.559 --rc geninfo_unexecuted_blocks=1 00:30:39.559 00:30:39.559 ' 00:30:39.559 12:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.559 12:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.559 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.464 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:41.465 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:41.465 12:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:41.465 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:41.465 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:41.465 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.465 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:30:41.725 00:30:41.725 --- 10.0.0.2 ping statistics --- 00:30:41.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.725 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:30:41.725 00:30:41.725 --- 10.0.0.1 ping statistics --- 00:30:41.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.725 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.725 12:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=774087 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 774087 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 774087 ']' 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:41.725 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.725 [2024-10-30 12:42:14.276814] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:41.725 [2024-10-30 12:42:14.277918] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:30:41.725 [2024-10-30 12:42:14.277977] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.725 [2024-10-30 12:42:14.353221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.984 [2024-10-30 12:42:14.415246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.984 [2024-10-30 12:42:14.415328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.984 [2024-10-30 12:42:14.415343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.984 [2024-10-30 12:42:14.415355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.984 [2024-10-30 12:42:14.415365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.984 [2024-10-30 12:42:14.417019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:41.984 [2024-10-30 12:42:14.417089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:41.984 [2024-10-30 12:42:14.417113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:41.984 [2024-10-30 12:42:14.417120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.984 [2024-10-30 12:42:14.514674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:41.984 [2024-10-30 12:42:14.514878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:41.984 [2024-10-30 12:42:14.515164] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:41.984 [2024-10-30 12:42:14.515823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:41.984 [2024-10-30 12:42:14.516036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.984 [2024-10-30 12:42:14.565876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.984 Malloc0 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.984 12:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.984 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:41.985 [2024-10-30 12:42:14.642196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:41.985 { 00:30:41.985 "params": { 00:30:41.985 "name": "Nvme$subsystem", 00:30:41.985 "trtype": "$TEST_TRANSPORT", 00:30:41.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.985 "adrfam": "ipv4", 00:30:41.985 "trsvcid": "$NVMF_PORT", 00:30:41.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.985 "hdgst": ${hdgst:-false}, 00:30:41.985 "ddgst": ${ddgst:-false} 00:30:41.985 }, 00:30:41.985 "method": "bdev_nvme_attach_controller" 00:30:41.985 } 00:30:41.985 EOF 00:30:41.985 )") 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:41.985 12:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:41.985 "params": { 00:30:41.985 "name": "Nvme1", 00:30:41.985 "trtype": "tcp", 00:30:41.985 "traddr": "10.0.0.2", 00:30:41.985 "adrfam": "ipv4", 00:30:41.985 "trsvcid": "4420", 00:30:41.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:41.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:41.985 "hdgst": false, 00:30:41.985 "ddgst": false 00:30:41.985 }, 00:30:41.985 "method": "bdev_nvme_attach_controller" 00:30:41.985 }' 00:30:42.243 [2024-10-30 12:42:14.696920] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
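The target-side provisioning traced above comes down to five RPCs. As a standalone sketch, assuming the harness's rpc_cmd is a thin wrapper around stock scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    # Provisioning sequence as logged above; arguments copied verbatim from the rpc_cmd calls.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420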
00:30:42.243 [2024-10-30 12:42:14.697011] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774164 ] 00:30:42.243 [2024-10-30 12:42:14.769290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:42.243 [2024-10-30 12:42:14.832112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.243 [2024-10-30 12:42:14.832165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.243 [2024-10-30 12:42:14.832169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.500 I/O targets: 00:30:42.500 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:42.500 00:30:42.500 00:30:42.500 CUnit - A unit testing framework for C - Version 2.1-3 00:30:42.500 http://cunit.sourceforge.net/ 00:30:42.500 00:30:42.500 00:30:42.500 Suite: bdevio tests on: Nvme1n1 00:30:42.500 Test: blockdev write read block ...passed 00:30:42.500 Test: blockdev write zeroes read block ...passed 00:30:42.500 Test: blockdev write zeroes read no split ...passed 00:30:42.500 Test: blockdev write zeroes read split ...passed 00:30:42.500 Test: blockdev write zeroes read split partial ...passed 00:30:42.500 Test: blockdev reset ...[2024-10-30 12:42:15.147213] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:42.500 [2024-10-30 12:42:15.147332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fb640 (9): Bad file descriptor 00:30:42.758 [2024-10-30 12:42:15.280359] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
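The JSON block printed above is what bdevio consumes on /dev/fd/62, and the reset test then tears the TCP qpair down and rebuilds it; the "Failed to flush tqpair ... (9): Bad file descriptor" notice is the expected flush of in-flight I/O against the closed socket, not a failure. A hedged initiator-side equivalent, assuming the stock scripts/rpc.py helpers and their usual flag spellings:

    # Attach the same controller by hand, then exercise the same reset path.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    scripts/rpc.py bdev_nvme_reset_controller Nvme1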
00:30:42.758 passed 00:30:42.758 Test: blockdev write read 8 blocks ...passed 00:30:42.758 Test: blockdev write read size > 128k ...passed 00:30:42.758 Test: blockdev write read invalid size ...passed 00:30:42.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:42.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:42.758 Test: blockdev write read max offset ...passed 00:30:42.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:42.758 Test: blockdev writev readv 8 blocks ...passed 00:30:42.758 Test: blockdev writev readv 30 x 1block ...passed 00:30:43.016 Test: blockdev writev readv block ...passed 00:30:43.016 Test: blockdev writev readv size > 128k ...passed 00:30:43.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:43.016 Test: blockdev comparev and writev ...[2024-10-30 12:42:15.491646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.491682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.491712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.491741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.492119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.492145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.492168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.492184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.492587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.492623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.492661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.493046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.493070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.493092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:43.017 [2024-10-30 12:42:15.493108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:43.017 passed 00:30:43.017 Test: blockdev nvme passthru rw ...passed 00:30:43.017 Test: blockdev nvme passthru vendor specific ...[2024-10-30 12:42:15.575533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:43.017 [2024-10-30 12:42:15.575562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.575728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:43.017 [2024-10-30 12:42:15.575753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.575911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:43.017 [2024-10-30 12:42:15.575937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:43.017 [2024-10-30 12:42:15.576100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:43.017 [2024-10-30 12:42:15.576125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:43.017 passed 00:30:43.017 Test: blockdev nvme admin passthru ...passed 00:30:43.017 Test: blockdev copy ...passed 00:30:43.017 00:30:43.017 Run Summary: Type Total Ran Passed Failed Inactive 00:30:43.017 suites 1 1 n/a 0 0 00:30:43.017 tests 23 23 23 0 0 00:30:43.017 asserts 152 152 152 0 n/a 00:30:43.017 00:30:43.017 Elapsed time = 1.170 seconds 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.277 rmmod nvme_tcp 00:30:43.277 rmmod nvme_fabrics 00:30:43.277 rmmod nvme_keyring 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
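At this point the suite is done (23/23 tests, 152 asserts) and nvmftestfini unwinds the setup: subsystem deletion and kernel-module removal above, process kill and firewall/namespace cleanup below. Condensed, and with the harness bookkeeping elided, the teardown amounts to roughly:

    # Teardown sketch; 774087 is this run's target PID, netns name as created above.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics       # tolerated to fail while set +e is active
    kill 774087 && wait 774087
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the test ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed _remove_spdk_ns equivalent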
00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 774087 ']' 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 774087 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 774087 ']' 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 774087 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 774087 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 774087' 00:30:43.277 killing process with pid 774087 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 774087 00:30:43.277 12:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 774087 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.537 12:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.074 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.074 00:30:46.074 real 0m6.444s 00:30:46.074 user 0m8.373s 
00:30:46.074 sys 0m2.551s 00:30:46.074 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:46.074 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:46.074 ************************************ 00:30:46.074 END TEST nvmf_bdevio 00:30:46.074 ************************************ 00:30:46.074 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:46.074 00:30:46.074 real 3m55.369s 00:30:46.074 user 8m52.150s 00:30:46.074 sys 1m25.752s 00:30:46.074 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:46.074 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:46.074 ************************************ 00:30:46.074 END TEST nvmf_target_core_interrupt_mode 00:30:46.074 ************************************ 00:30:46.074 12:42:18 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:46.074 12:42:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:46.074 12:42:18 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:46.074 12:42:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.074 ************************************ 00:30:46.074 START TEST nvmf_interrupt 00:30:46.074 ************************************ 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:46.074 * Looking for test storage... 
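A new suite starts here. run_test only adds timing and xtrace bookkeeping around the script it names, so outside the harness the same suite can presumably be launched directly from the SPDK tree:

    sudo test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode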
00:30:46.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.074 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.075 --rc genhtml_branch_coverage=1 00:30:46.075 --rc genhtml_function_coverage=1 00:30:46.075 --rc genhtml_legend=1 00:30:46.075 --rc geninfo_all_blocks=1 00:30:46.075 --rc geninfo_unexecuted_blocks=1 00:30:46.075 00:30:46.075 ' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.075 --rc genhtml_branch_coverage=1 00:30:46.075 --rc genhtml_function_coverage=1 00:30:46.075 --rc genhtml_legend=1 00:30:46.075 --rc geninfo_all_blocks=1 00:30:46.075 --rc geninfo_unexecuted_blocks=1 00:30:46.075 00:30:46.075 ' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.075 --rc genhtml_branch_coverage=1 00:30:46.075 --rc genhtml_function_coverage=1 00:30:46.075 --rc genhtml_legend=1 00:30:46.075 --rc geninfo_all_blocks=1 00:30:46.075 --rc geninfo_unexecuted_blocks=1 00:30:46.075 00:30:46.075 ' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.075 --rc genhtml_branch_coverage=1 00:30:46.075 --rc genhtml_function_coverage=1 00:30:46.075 --rc genhtml_legend=1 00:30:46.075 --rc geninfo_all_blocks=1 00:30:46.075 --rc geninfo_unexecuted_blocks=1 00:30:46.075 00:30:46.075 ' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.075 12:42:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:47.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.978 12:42:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:47.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.978 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:47.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:47.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:47.979 12:42:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:47.979 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:30:48.239 00:30:48.239 --- 10.0.0.2 ping statistics --- 00:30:48.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.239 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:30:48.239 00:30:48.239 --- 10.0.0.1 ping statistics --- 00:30:48.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.239 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=776324 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 776324 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 776324 ']' 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:48.239 12:42:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.239 [2024-10-30 12:42:20.840176] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:48.239 [2024-10-30 12:42:20.841274] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:30:48.239 [2024-10-30 12:42:20.841336] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.239 [2024-10-30 12:42:20.918740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:48.499 [2024-10-30 12:42:20.983205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
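The second target comes up inside the same namespace, this time pinned to two cores in interrupt mode. Stripped of the nvmfappstart plumbing, the launch logged above reduces to:

    # Copied from the trace: -i 0 is the shm id, -e 0xFFFF enables all tracepoint
    # groups (hence the mask notice), -m 0x3 runs reactors on cores 0 and 1.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3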
00:30:48.499 [2024-10-30 12:42:20.983286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.499 [2024-10-30 12:42:20.983302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.499 [2024-10-30 12:42:20.983313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.499 [2024-10-30 12:42:20.983323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.499 [2024-10-30 12:42:20.984795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.499 [2024-10-30 12:42:20.984801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.499 [2024-10-30 12:42:21.083724] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:48.499 [2024-10-30 12:42:21.083727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:48.499 [2024-10-30 12:42:21.083981] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:48.499 5000+0 records in 00:30:48.499 5000+0 records out 00:30:48.499 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0142742 s, 717 MB/s 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.499 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.758 AIO0 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.758 [2024-10-30 12:42:21.193460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.758 12:42:21 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:48.758 [2024-10-30 12:42:21.217669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 776324 0 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 776324 0 idle 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:48.758 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776324 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.29 reactor_0' 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776324 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.29 reactor_0 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 776324 1 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 776324 1 idle 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:30:48.759 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776334 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.00 reactor_1' 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776334 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.00 reactor_1 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=776472 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
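The idle/busy probes that bracket the perf run all go through one helper in interrupt/common.sh, traced line by line above: sample top once in batch/thread mode, pick the reactor_<idx> row, and compare its %CPU column against a threshold. A condensed sketch of that probe, with the pid and threshold copied from this run (the real helper's retry loop is omitted):

pid=776324 idx=0 idle_threshold=30   # values from this run

# one batch iteration (-b -n 1), thread view (-H), wide columns (-w 256)
row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
cpu_rate=$(echo "$row" | awk '{print $9}')   # $9 is top's %CPU column
cpu_rate=${cpu_rate%.*}                      # 99.9 -> 99, as in the trace

if (( cpu_rate > idle_threshold )); then
  echo "reactor_${idx} busy at ${cpu_rate}%"
else
  echo "reactor_${idx} idle at ${cpu_rate}%"
fi

In interrupt mode the idle assertion is the interesting one: a polling reactor would sit near 100% here, so the 0.0 readings above are the test's evidence that interrupt mode is actually in effect.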
00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 776324 0 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 776324 0 busy 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:30:49.018 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776324 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.29 reactor_0' 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776324 root 20 0 128.2g 47616 34560 S 0.0 0.1 0:00.29 reactor_0 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:49.278 12:42:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776324 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.51 reactor_0' 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776324 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.51 reactor_0 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 776324 1 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 776324 1 busy 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:50.219 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:50.478 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:30:50.478 12:42:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776334 root 20 0 128.2g 48000 34560 R 93.3 0.1 0:01.23 reactor_1' 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776334 root 20 0 128.2g 48000 34560 R 93.3 0.1 0:01.23 reactor_1 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:50.478 12:42:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 776472 00:31:00.500 Initializing NVMe Controllers 00:31:00.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.500 Controller IO queue size 256, less than required. 00:31:00.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:00.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:00.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:00.500 Initialization complete. Launching workers. 
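The load itself comes from the spdk_nvme_perf invocation traced at target/interrupt.sh@31: queue depth 256 (-q), 4 KiB I/Os (-o 4096), a mixed random workload (-w randrw with -M 30, perf's read share of the mix), 10 seconds (-t), on cores 2 and 3 (-c 0xC, matching the "with lcore 2/3" associations above). The summary that follows is self-consistent, which is worth verifying when reading perf logs; two bc one-liners with the numbers copied from the table below:

echo '25983.99 * 4096 / 1048576' | bc -l      # ~101.50: total IOPS * 4 KiB recovers the MiB/s column
echo '12732.10 * 20122.21 / 1000000' | bc -l  # ~256.2: IOPS * avg latency (us) recovers the -q 256 depth (Little's law)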
00:31:00.500 ======================================================== 00:31:00.500 Latency(us) 00:31:00.500 Device Information : IOPS MiB/s Average min max 00:31:00.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 12732.10 49.73 20122.21 4252.40 24349.25 00:31:00.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13251.90 51.77 19332.18 3985.11 22158.18 00:31:00.500 ======================================================== 00:31:00.500 Total : 25983.99 101.50 19719.29 3985.11 24349.25 00:31:00.500 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 776324 0 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 776324 0 idle 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776324 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:19.67 reactor_0' 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776324 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:19.67 reactor_0 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.500 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 776324 1 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 776324 1 idle 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:31:00.501 12:42:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776334 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.40 reactor_1' 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776334 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.40 reactor_1 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:31:00.501 12:42:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 776324 0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 776324 0 idle 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776324 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:19.77 reactor_0' 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776324 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:19.77 reactor_0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 776324 1 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 776324 1 idle 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=776324 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:01.879 12:42:34 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 776324 -w 256 00:31:01.879 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 776334 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:09.43 reactor_1' 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 776334 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:09.43 reactor_1 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:02.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:02.138 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:02.139 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:02.139 rmmod nvme_tcp 00:31:02.399 rmmod nvme_fabrics 00:31:02.399 rmmod nvme_keyring 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 776324 ']' 00:31:02.399 
12:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 776324 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 776324 ']' 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 776324 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 776324 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 776324' 00:31:02.399 killing process with pid 776324 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 776324 00:31:02.399 12:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 776324 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:02.659 12:42:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.567 12:42:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.567 00:31:04.567 real 0m18.904s 00:31:04.567 user 0m36.550s 00:31:04.567 sys 0m6.937s 00:31:04.567 12:42:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:04.567 12:42:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:04.567 ************************************ 00:31:04.567 END TEST nvmf_interrupt 00:31:04.567 ************************************ 00:31:04.567 00:31:04.567 real 24m59.331s 00:31:04.567 user 58m30.231s 00:31:04.567 sys 6m46.454s 00:31:04.567 12:42:37 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:04.567 12:42:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.567 ************************************ 00:31:04.567 END TEST nvmf_tcp 00:31:04.567 ************************************ 00:31:04.567 12:42:37 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:31:04.567 12:42:37 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:04.567 12:42:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 
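Before the spdkcli output begins, the initiator-side steps the interrupt test just walked through condense to three commands: connect the kernel NVMe/TCP initiator to the subsystem, confirm a block device carrying the target's serial shows up, then disconnect. All values below are copied from the traced commands:

# hostnqn/hostid as generated for this run; the serial is the -s value
# the subsystem was created with (SPDKISFASTANDAWESOME)
sudo nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# waitforserial's core check: count matching block devices (expect 1)
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1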
00:31:04.567 12:42:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:04.567 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:31:04.826 ************************************ 00:31:04.826 START TEST spdkcli_nvmf_tcp 00:31:04.827 ************************************ 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:04.827 * Looking for test storage... 00:31:04.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.827 --rc genhtml_branch_coverage=1 00:31:04.827 --rc genhtml_function_coverage=1 00:31:04.827 --rc genhtml_legend=1 00:31:04.827 --rc geninfo_all_blocks=1 00:31:04.827 --rc geninfo_unexecuted_blocks=1 00:31:04.827 00:31:04.827 ' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.827 --rc genhtml_branch_coverage=1 00:31:04.827 --rc genhtml_function_coverage=1 00:31:04.827 --rc genhtml_legend=1 00:31:04.827 --rc geninfo_all_blocks=1 00:31:04.827 --rc geninfo_unexecuted_blocks=1 00:31:04.827 00:31:04.827 ' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.827 --rc genhtml_branch_coverage=1 00:31:04.827 --rc genhtml_function_coverage=1 00:31:04.827 --rc genhtml_legend=1 00:31:04.827 --rc geninfo_all_blocks=1 00:31:04.827 --rc geninfo_unexecuted_blocks=1 00:31:04.827 00:31:04.827 ' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.827 --rc genhtml_branch_coverage=1 00:31:04.827 --rc genhtml_function_coverage=1 00:31:04.827 --rc genhtml_legend=1 00:31:04.827 --rc geninfo_all_blocks=1 00:31:04.827 --rc geninfo_unexecuted_blocks=1 00:31:04.827 00:31:04.827 ' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:04.827 
12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:04.827 12:42:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:04.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=778385 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 778385 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 778385 ']' 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:04.827 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.827 [2024-10-30 12:42:37.473212] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
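The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which simply retries an RPC until the freshly started target answers. An illustrative reduction, assuming scripts/rpc.py from the same tree (the exact retry loop in autotest_common.sh differs):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
  # rpc_get_methods is a cheap call any live SPDK app answers
  "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done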
00:31:04.827 [2024-10-30 12:42:37.473346] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778385 ] 00:31:05.086 [2024-10-30 12:42:37.542607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:05.086 [2024-10-30 12:42:37.606649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.086 [2024-10-30 12:42:37.606654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.086 12:42:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:05.086 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:05.086 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:05.086 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:05.086 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:05.086 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:05.086 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:05.086 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:05.086 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:05.086 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:05.086 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:05.086 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:05.086 ' 00:31:08.375 [2024-10-30 12:42:40.351822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.941 [2024-10-30 12:42:41.624148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:11.473 [2024-10-30 12:42:43.967310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:13.377 [2024-10-30 12:42:45.989741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:15.285 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:15.285 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:15.285 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:15.285 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:15.285 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:15.285 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:15.285 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:15.285 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:15.285 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:15.285 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:15.285 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:15.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:15.285 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:15.285 12:42:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.543 
12:42:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.543 12:42:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:15.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:15.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:15.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:15.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:15.543 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:15.543 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:15.543 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:15.543 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:15.543 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:15.543 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:15.543 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:15.543 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:15.543 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:15.543 ' 00:31:20.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:20.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:20.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:20.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:20.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:20.820 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:20.820 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:20.820 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:20.820 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:20.820 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:20.821 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:20.821 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:20.821 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:20.821 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.080 
12:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 778385 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 778385 ']' 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 778385 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 778385 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 778385' 00:31:21.080 killing process with pid 778385 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 778385 00:31:21.080 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 778385 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 778385 ']' 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 778385 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 778385 ']' 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 778385 00:31:21.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (778385) - No such process 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 778385 is not found' 00:31:21.339 Process with pid 778385 is not found 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:21.339 00:31:21.339 real 0m16.586s 00:31:21.339 user 0m35.334s 00:31:21.339 sys 0m0.779s 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:21.339 12:42:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.339 ************************************ 00:31:21.339 END TEST spdkcli_nvmf_tcp 00:31:21.339 ************************************ 00:31:21.339 12:42:53 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:21.339 12:42:53 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:21.339 12:42:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:21.339 12:42:53 -- common/autotest_common.sh@10 -- # set +x 00:31:21.339 ************************************ 00:31:21.339 START TEST nvmf_identify_passthru 00:31:21.339 ************************************ 00:31:21.339 12:42:53 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:21.339 * Looking for test storage... 
00:31:21.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.339 12:42:53 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:21.339 12:42:53 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:31:21.339 12:42:53 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:21.339 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:21.598 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.598 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.598 --rc genhtml_branch_coverage=1 00:31:21.598 --rc genhtml_function_coverage=1 00:31:21.598 --rc genhtml_legend=1 00:31:21.598 --rc geninfo_all_blocks=1 00:31:21.598 --rc geninfo_unexecuted_blocks=1 00:31:21.598 00:31:21.598 ' 00:31:21.598 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.598 --rc genhtml_branch_coverage=1 00:31:21.598 --rc genhtml_function_coverage=1 00:31:21.598 --rc genhtml_legend=1 00:31:21.598 --rc geninfo_all_blocks=1 00:31:21.598 --rc geninfo_unexecuted_blocks=1 00:31:21.598 00:31:21.598 ' 00:31:21.598 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.598 --rc genhtml_branch_coverage=1 00:31:21.598 --rc genhtml_function_coverage=1 00:31:21.598 --rc genhtml_legend=1 00:31:21.598 --rc geninfo_all_blocks=1 00:31:21.598 --rc geninfo_unexecuted_blocks=1 00:31:21.598 00:31:21.598 ' 00:31:21.598 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.598 --rc genhtml_branch_coverage=1 00:31:21.598 --rc genhtml_function_coverage=1 00:31:21.598 --rc genhtml_legend=1 00:31:21.598 --rc geninfo_all_blocks=1 00:31:21.598 --rc geninfo_unexecuted_blocks=1 00:31:21.598 00:31:21.598 ' 00:31:21.598 12:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.598 12:42:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.598 12:42:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.598 12:42:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.598 12:42:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:21.598 12:42:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:21.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.598 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.598 12:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.598 12:42:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.599 12:42:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.599 12:42:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.599 12:42:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.599 12:42:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.599 12:42:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:21.599 12:42:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.599 12:42:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.599 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:21.599 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.599 12:42:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.599 12:42:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.506 12:42:56 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:23.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:23.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:23.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.506 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:23.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.507 12:42:56 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.507 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:31:23.765 00:31:23.765 --- 10.0.0.2 ping statistics --- 00:31:23.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.765 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:31:23.765 00:31:23.765 --- 10.0.0.1 ping statistics --- 00:31:23.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.765 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:23.765 12:42:56 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:31:23.765 12:42:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:23.765 12:42:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:27.962 12:43:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:31:27.962 12:43:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:31:27.962 12:43:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:27.962 12:43:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=783020 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.154 12:43:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 783020 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 783020 ']' 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:32.154 12:43:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:32.154 [2024-10-30 12:43:04.830193] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:31:32.154 [2024-10-30 12:43:04.830322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.412 [2024-10-30 12:43:04.902169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.412 [2024-10-30 12:43:04.960983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.412 [2024-10-30 12:43:04.961052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:32.412 [2024-10-30 12:43:04.961065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.412 [2024-10-30 12:43:04.961076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.412 [2024-10-30 12:43:04.961086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.412 [2024-10-30 12:43:04.962709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.412 [2024-10-30 12:43:04.962784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.412 [2024-10-30 12:43:04.962841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.412 [2024-10-30 12:43:04.962838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:31:32.412 12:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:32.412 INFO: Log level set to 20 00:31:32.412 INFO: Requests: 00:31:32.412 { 00:31:32.412 "jsonrpc": "2.0", 00:31:32.412 "method": "nvmf_set_config", 00:31:32.412 "id": 1, 00:31:32.412 "params": { 00:31:32.412 "admin_cmd_passthru": { 00:31:32.412 "identify_ctrlr": true 00:31:32.412 } 00:31:32.412 } 00:31:32.412 } 00:31:32.412 00:31:32.412 INFO: response: 00:31:32.412 { 00:31:32.412 "jsonrpc": "2.0", 00:31:32.412 "id": 1, 00:31:32.412 "result": true 00:31:32.412 } 00:31:32.412 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.412 12:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.412 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:32.412 INFO: Setting log level to 20 00:31:32.412 INFO: Setting log level to 20 00:31:32.412 INFO: Log level set to 20 00:31:32.412 INFO: Log level set to 20 00:31:32.412 INFO: Requests: 00:31:32.412 { 00:31:32.412 "jsonrpc": "2.0", 00:31:32.412 "method": "framework_start_init", 00:31:32.412 "id": 1 00:31:32.412 } 00:31:32.412 00:31:32.412 INFO: Requests: 00:31:32.412 { 00:31:32.412 "jsonrpc": "2.0", 00:31:32.412 "method": "framework_start_init", 00:31:32.412 "id": 1 00:31:32.412 } 00:31:32.412 00:31:32.672 [2024-10-30 12:43:05.168463] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:32.672 INFO: response: 00:31:32.672 { 00:31:32.672 "jsonrpc": "2.0", 00:31:32.672 "id": 1, 00:31:32.672 "result": true 00:31:32.672 } 00:31:32.672 00:31:32.672 INFO: response: 00:31:32.672 { 00:31:32.672 "jsonrpc": "2.0", 00:31:32.672 "id": 1, 00:31:32.672 "result": true 00:31:32.672 } 00:31:32.672 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.672 12:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.672 12:43:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:32.672 INFO: Setting log level to 40 00:31:32.672 INFO: Setting log level to 40 00:31:32.672 INFO: Setting log level to 40 00:31:32.672 [2024-10-30 12:43:05.178579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.672 12:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:32.672 12:43:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.672 12:43:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 Nvme0n1 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 [2024-10-30 12:43:08.073132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 [ 00:31:35.961 { 00:31:35.961 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:35.961 "subtype": "Discovery", 00:31:35.961 "listen_addresses": [], 00:31:35.961 "allow_any_host": true, 00:31:35.961 "hosts": [] 00:31:35.961 }, 00:31:35.961 { 00:31:35.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.961 "subtype": "NVMe", 00:31:35.961 "listen_addresses": [ 00:31:35.961 { 00:31:35.961 "trtype": "TCP", 00:31:35.961 "adrfam": "IPv4", 00:31:35.961 "traddr": "10.0.0.2", 00:31:35.961 "trsvcid": "4420" 00:31:35.961 } 00:31:35.961 ], 00:31:35.961 "allow_any_host": true, 00:31:35.961 "hosts": [], 00:31:35.961 "serial_number": 
"SPDK00000000000001", 00:31:35.961 "model_number": "SPDK bdev Controller", 00:31:35.961 "max_namespaces": 1, 00:31:35.961 "min_cntlid": 1, 00:31:35.961 "max_cntlid": 65519, 00:31:35.961 "namespaces": [ 00:31:35.961 { 00:31:35.961 "nsid": 1, 00:31:35.961 "bdev_name": "Nvme0n1", 00:31:35.961 "name": "Nvme0n1", 00:31:35.961 "nguid": "3793298CF50A48A48A78D63FA01ECE46", 00:31:35.961 "uuid": "3793298c-f50a-48a4-8a78-d63fa01ece46" 00:31:35.961 } 00:31:35.961 ] 00:31:35.961 } 00:31:35.961 ] 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:35.961 12:43:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.961 rmmod nvme_tcp 00:31:35.961 rmmod nvme_fabrics 00:31:35.961 rmmod nvme_keyring 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 783020 ']' 00:31:35.961 12:43:08 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 783020 00:31:35.961 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 783020 ']' 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 783020 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 783020 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 783020' 00:31:35.962 killing process with pid 783020 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 783020 00:31:35.962 12:43:08 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 783020 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.865 12:43:10 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.865 12:43:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:37.865 12:43:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.773 12:43:12 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.773 00:31:39.773 real 0m18.287s 00:31:39.773 user 0m26.526s 00:31:39.773 sys 0m3.171s 00:31:39.773 12:43:12 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:39.773 12:43:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:39.773 ************************************ 00:31:39.773 END TEST nvmf_identify_passthru 00:31:39.773 ************************************ 00:31:39.773 12:43:12 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:39.773 12:43:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:39.773 12:43:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:39.773 12:43:12 -- common/autotest_common.sh@10 -- # set +x 00:31:39.773 ************************************ 00:31:39.773 START TEST nvmf_dif 00:31:39.773 ************************************ 00:31:39.773 12:43:12 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:39.773 * Looking for test storage... 
00:31:39.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:39.773 12:43:12 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:39.773 12:43:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:31:39.773 12:43:12 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:39.773 12:43:12 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.773 12:43:12 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:39.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.774 --rc genhtml_branch_coverage=1 00:31:39.774 --rc genhtml_function_coverage=1 00:31:39.774 --rc genhtml_legend=1 00:31:39.774 --rc geninfo_all_blocks=1 00:31:39.774 --rc geninfo_unexecuted_blocks=1 00:31:39.774 00:31:39.774 ' 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:39.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.774 --rc genhtml_branch_coverage=1 00:31:39.774 --rc genhtml_function_coverage=1 00:31:39.774 --rc genhtml_legend=1 00:31:39.774 --rc geninfo_all_blocks=1 00:31:39.774 --rc geninfo_unexecuted_blocks=1 00:31:39.774 00:31:39.774 ' 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:31:39.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.774 --rc genhtml_branch_coverage=1 00:31:39.774 --rc genhtml_function_coverage=1 00:31:39.774 --rc genhtml_legend=1 00:31:39.774 --rc geninfo_all_blocks=1 00:31:39.774 --rc geninfo_unexecuted_blocks=1 00:31:39.774 00:31:39.774 ' 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:39.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.774 --rc genhtml_branch_coverage=1 00:31:39.774 --rc genhtml_function_coverage=1 00:31:39.774 --rc genhtml_legend=1 00:31:39.774 --rc geninfo_all_blocks=1 00:31:39.774 --rc geninfo_unexecuted_blocks=1 00:31:39.774 00:31:39.774 ' 00:31:39.774 12:43:12 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.774 12:43:12 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.774 12:43:12 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.774 12:43:12 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.774 12:43:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.774 12:43:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:39.774 12:43:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:39.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.774 12:43:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:39.774 12:43:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:39.774 12:43:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:39.774 12:43:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:39.774 12:43:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.774 12:43:12 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:31:39.774 12:43:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.358 12:43:14 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.358 12:43:14 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:42.358 12:43:14 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:42.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.359 
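For reference, the discovery loop traced above classifies NICs purely by PCI vendor:device ID — 0x8086:0x159b is an Intel E810-family (ice) part, which is why both ports land in the e810 array. A minimal standalone sketch of the same sysfs scan (paths assume the standard Linux PCI sysfs layout; the helper name is illustrative, not SPDK's):

  # list_e810_ports: enumerate Intel E810 PCI functions and their netdevs via sysfs
  list_e810_ports() {
    local pci vendor device net
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")    # e.g. 0x8086 (Intel)
      device=$(<"$pci/device")    # e.g. 0x159b or 0x1592 (E810 family)
      [[ $vendor == 0x8086 && $device =~ ^0x(159b|1592)$ ]] || continue
      for net in "$pci"/net/*; do # netdevs bound to this PCI function
        [[ -e $net ]] && echo "Found ${pci##*/} -> ${net##*/}"
      done
    done
  }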
12:43:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:42.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:42.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:42.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:42.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:31:42.359 00:31:42.359 --- 10.0.0.2 ping statistics --- 00:31:42.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.359 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:42.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:31:42.359 00:31:42.359 --- 10.0.0.1 ping statistics --- 00:31:42.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.359 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:42.359 12:43:14 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:43.298 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:43.299 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:43.299 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:43.299 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:43.299 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:43.299 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:43.299 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:43.299 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:43.299 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:43.299 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:43.299 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:43.299 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:43.299 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:43.299 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:43.299 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:43.299 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:43.299 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:43.299 12:43:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:43.299 12:43:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=786295 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:43.299 12:43:15 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 786295 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 786295 ']' 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:43.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.299 12:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:43.299 [2024-10-30 12:43:15.933845] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:31:43.299 [2024-10-30 12:43:15.933931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.557 [2024-10-30 12:43:16.002547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.557 [2024-10-30 12:43:16.055236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.557 [2024-10-30 12:43:16.055308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.557 [2024-10-30 12:43:16.055331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.557 [2024-10-30 12:43:16.055349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.557 [2024-10-30 12:43:16.055363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.557 [2024-10-30 12:43:16.055935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.557 12:43:16 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:43.557 12:43:16 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:31:43.557 12:43:16 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:43.557 12:43:16 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:43.557 12:43:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:43.557 12:43:16 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.557 12:43:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:43.557 12:43:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:43.557 12:43:16 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.557 12:43:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:43.814 [2024-10-30 12:43:16.241814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.814 12:43:16 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.814 12:43:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:43.814 12:43:16 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:43.814 12:43:16 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:43.814 12:43:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:43.814 ************************************ 00:31:43.814 START TEST fio_dif_1_default 00:31:43.814 ************************************ 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:43.814 bdev_null0 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.814 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:43.815 [2024-10-30 12:43:16.298080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.815 { 00:31:43.815 "params": { 00:31:43.815 "name": "Nvme$subsystem", 00:31:43.815 "trtype": "$TEST_TRANSPORT", 00:31:43.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.815 "adrfam": "ipv4", 00:31:43.815 "trsvcid": "$NVMF_PORT", 00:31:43.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.815 "hdgst": ${hdgst:-false}, 00:31:43.815 "ddgst": ${ddgst:-false} 00:31:43.815 }, 00:31:43.815 "method": "bdev_nvme_attach_controller" 00:31:43.815 } 00:31:43.815 EOF 00:31:43.815 )") 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
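The config being assembled in this stretch is worth spelling out: gen_nvmf_target_json emits one heredoc fragment per subsystem and jq merges them into the bdev_nvme_attach_controller parameters that fio's spdk_bdev plugin reads from /dev/fd/62 (the merged JSON is printed a few lines below). A hand-written single-controller equivalent, copying this run's address and NQNs (the file name is illustrative):

  cat > nvme0.json <<'EOF'
  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF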
00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:43.815 "params": { 00:31:43.815 "name": "Nvme0", 00:31:43.815 "trtype": "tcp", 00:31:43.815 "traddr": "10.0.0.2", 00:31:43.815 "adrfam": "ipv4", 00:31:43.815 "trsvcid": "4420", 00:31:43.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:43.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:43.815 "hdgst": false, 00:31:43.815 "ddgst": false 00:31:43.815 }, 00:31:43.815 "method": "bdev_nvme_attach_controller" 00:31:43.815 }' 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:43.815 12:43:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.072 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:44.072 fio-3.35 00:31:44.072 Starting 1 thread 00:31:56.263 00:31:56.263 filename0: (groupid=0, jobs=1): err= 0: pid=786524: Wed Oct 30 12:43:27 2024 00:31:56.263 read: IOPS=101, BW=405KiB/s (415kB/s)(4064KiB/10023msec) 00:31:56.263 slat (nsec): min=4262, max=73737, avg=8796.86, stdev=4218.07 00:31:56.263 clat (usec): min=580, max=47961, avg=39432.86, stdev=7863.83 00:31:56.263 lat (usec): min=586, max=47987, avg=39441.66, stdev=7863.12 00:31:56.263 clat percentiles (usec): 00:31:56.263 | 1.00th=[ 611], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:56.263 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:56.263 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:56.263 | 99.00th=[41681], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:31:56.263 | 99.99th=[47973] 00:31:56.263 bw ( KiB/s): min= 384, max= 544, per=99.64%, avg=404.80, stdev=37.83, samples=20 00:31:56.263 iops : min= 96, max= 136, avg=101.20, stdev= 9.46, samples=20 00:31:56.263 lat (usec) : 750=3.05%, 1000=0.89% 00:31:56.263 lat (msec) : 50=96.06% 00:31:56.263 cpu : usr=92.18%, sys=7.53%, ctx=24, majf=0, minf=312 00:31:56.263 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.263 issued rwts: total=1016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.263 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:56.263 
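The generated job file itself is never echoed in this log, but a section consistent with the fio banner above (randread, 4 KiB blocks, iodepth 4, one thread, ~10 s run) would look roughly like the sketch below. The filename is an assumption based on SPDK's naming convention that a controller attached as "Nvme0" exposes its namespace as bdev "Nvme0n1":

  [global]
  thread=1            ; the SPDK fio plugin requires threaded mode
  time_based=1
  runtime=10          ; inferred from run=10023msec in the stats above

  [filename0]
  ioengine=spdk_bdev
  filename=Nvme0n1    ; assumed bdev name; not shown verbatim in this log
  rw=randread
  bs=4096
  iodepth=4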
00:31:56.263 Run status group 0 (all jobs): 00:31:56.263 READ: bw=405KiB/s (415kB/s), 405KiB/s-405KiB/s (415kB/s-415kB/s), io=4064KiB (4162kB), run=10023-10023msec 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.263 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 00:31:56.264 real 0m11.057s 00:31:56.264 user 0m10.354s 00:31:56.264 sys 0m1.000s 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 ************************************ 00:31:56.264 END TEST fio_dif_1_default 00:31:56.264 ************************************ 00:31:56.264 12:43:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:56.264 12:43:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:56.264 12:43:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 ************************************ 00:31:56.264 START TEST fio_dif_1_multi_subsystems 00:31:56.264 ************************************ 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 bdev_null0 00:31:56.264 12:43:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 [2024-10-30 12:43:27.410227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 bdev_null1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.264 { 00:31:56.264 "params": { 00:31:56.264 "name": "Nvme$subsystem", 00:31:56.264 "trtype": "$TEST_TRANSPORT", 00:31:56.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.264 "adrfam": "ipv4", 00:31:56.264 "trsvcid": "$NVMF_PORT", 00:31:56.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.264 "hdgst": ${hdgst:-false}, 00:31:56.264 "ddgst": ${ddgst:-false} 00:31:56.264 }, 00:31:56.264 "method": "bdev_nvme_attach_controller" 00:31:56.264 } 00:31:56.264 EOF 00:31:56.264 )") 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.264 
12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.264 { 00:31:56.264 "params": { 00:31:56.264 "name": "Nvme$subsystem", 00:31:56.264 "trtype": "$TEST_TRANSPORT", 00:31:56.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.264 "adrfam": "ipv4", 00:31:56.264 "trsvcid": "$NVMF_PORT", 00:31:56.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.264 "hdgst": ${hdgst:-false}, 00:31:56.264 "ddgst": ${ddgst:-false} 00:31:56.264 }, 00:31:56.264 "method": "bdev_nvme_attach_controller" 00:31:56.264 } 00:31:56.264 EOF 00:31:56.264 )") 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.264 "params": { 00:31:56.264 "name": "Nvme0", 00:31:56.264 "trtype": "tcp", 00:31:56.264 "traddr": "10.0.0.2", 00:31:56.264 "adrfam": "ipv4", 00:31:56.264 "trsvcid": "4420", 00:31:56.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.264 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.264 "hdgst": false, 00:31:56.264 "ddgst": false 00:31:56.264 }, 00:31:56.264 "method": "bdev_nvme_attach_controller" 00:31:56.264 },{ 00:31:56.264 "params": { 00:31:56.264 "name": "Nvme1", 00:31:56.264 "trtype": "tcp", 00:31:56.264 "traddr": "10.0.0.2", 00:31:56.264 "adrfam": "ipv4", 00:31:56.264 "trsvcid": "4420", 00:31:56.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.264 "hdgst": false, 00:31:56.264 "ddgst": false 00:31:56.264 }, 00:31:56.264 "method": "bdev_nvme_attach_controller" 00:31:56.264 }' 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:56.264 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:56.265 12:43:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:56.265 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:56.265 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:56.265 fio-3.35 00:31:56.265 Starting 2 threads 00:32:06.228 00:32:06.228 filename0: (groupid=0, jobs=1): err= 0: pid=787923: Wed Oct 30 12:43:38 2024 00:32:06.228 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10026msec) 00:32:06.228 slat (nsec): min=6857, max=34477, avg=9411.48, stdev=3853.04 00:32:06.228 clat (usec): min=623, max=45719, avg=40892.22, stdev=3671.17 00:32:06.228 lat (usec): min=630, max=45753, avg=40901.63, stdev=3671.32 00:32:06.228 clat percentiles (usec): 00:32:06.228 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:06.228 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:06.228 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:32:06.228 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:32:06.228 | 99.99th=[45876] 00:32:06.228 bw ( KiB/s): min= 384, max= 416, per=32.57%, avg=390.40, stdev=13.13, samples=20 00:32:06.228 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:32:06.228 lat (usec) : 750=0.41% 00:32:06.228 lat (msec) : 2=0.41%, 50=99.18% 00:32:06.228 cpu : usr=94.64%, sys=5.06%, ctx=14, majf=0, minf=76 00:32:06.228 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.228 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.228 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:06.228 filename1: (groupid=0, jobs=1): err= 0: pid=787924: Wed Oct 30 12:43:38 2024 00:32:06.228 read: IOPS=201, BW=807KiB/s (826kB/s)(8096KiB/10036msec) 00:32:06.228 slat (nsec): min=6814, max=70602, avg=9261.88, stdev=3962.62 00:32:06.228 clat (usec): min=530, max=45725, avg=19805.19, stdev=20291.05 00:32:06.228 lat (usec): min=537, max=45759, avg=19814.45, stdev=20290.80 00:32:06.228 clat percentiles (usec): 00:32:06.228 | 1.00th=[ 570], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 635], 00:32:06.228 | 30.00th=[ 676], 40.00th=[ 725], 50.00th=[ 938], 60.00th=[41157], 00:32:06.228 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:06.228 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:32:06.228 | 99.99th=[45876] 00:32:06.228 bw ( KiB/s): min= 672, max= 960, per=67.49%, avg=808.00, stdev=75.49, samples=20 00:32:06.228 iops : min= 168, max= 240, avg=202.00, stdev=18.87, samples=20 00:32:06.228 lat (usec) : 750=42.59%, 1000=9.63% 00:32:06.228 lat (msec) : 2=0.74%, 50=47.04% 00:32:06.228 cpu : usr=94.98%, sys=4.71%, ctx=14, majf=0, minf=230 00:32:06.228 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:06.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.229 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.229 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:06.229 00:32:06.229 Run status group 0 (all jobs): 00:32:06.229 READ: bw=1197KiB/s (1226kB/s), 391KiB/s-807KiB/s (400kB/s-826kB/s), io=11.7MiB (12.3MB), run=10026-10036msec 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.229 00:32:06.229 real 0m11.477s 00:32:06.229 user 0m20.417s 00:32:06.229 sys 0m1.264s 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 ************************************ 00:32:06.229 END TEST fio_dif_1_multi_subsystems 00:32:06.229 ************************************ 00:32:06.229 12:43:38 nvmf_dif -- target/dif.sh@143 
-- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:06.229 12:43:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:06.229 12:43:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 ************************************ 00:32:06.229 START TEST fio_dif_rand_params 00:32:06.229 ************************************ 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:06.229 bdev_null0 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.229 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:06.488 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.488 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:06.489 [2024-10-30 12:43:38.928344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
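The subsystem bring-up xtraced above reduces to four RPCs against the running target; issued by hand with scripts/rpc.py they would be as follows (arguments copied from this run; rpc_cmd forwards to rpc.py on the default /var/tmp/spdk.sock UNIX socket, which stays reachable from the host even though the target runs inside the cvl_0_0_ns_spdk namespace — the transport itself was created earlier with -t tcp -o --dif-insert-or-strip):

  # null bdev with 16-byte metadata and DIF type 3, 64 MiB total, 512-byte blocks
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420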
00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:06.489 { 00:32:06.489 "params": { 00:32:06.489 "name": "Nvme$subsystem", 00:32:06.489 "trtype": "$TEST_TRANSPORT", 00:32:06.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.489 "adrfam": "ipv4", 00:32:06.489 "trsvcid": "$NVMF_PORT", 00:32:06.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.489 "hdgst": ${hdgst:-false}, 00:32:06.489 "ddgst": ${ddgst:-false} 00:32:06.489 }, 00:32:06.489 "method": "bdev_nvme_attach_controller" 00:32:06.489 } 00:32:06.489 EOF 00:32:06.489 )") 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.489 "params": { 00:32:06.489 "name": "Nvme0", 00:32:06.489 "trtype": "tcp", 00:32:06.489 "traddr": "10.0.0.2", 00:32:06.489 "adrfam": "ipv4", 00:32:06.489 "trsvcid": "4420", 00:32:06.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.489 "hdgst": false, 00:32:06.489 "ddgst": false 00:32:06.489 }, 00:32:06.489 "method": "bdev_nvme_attach_controller" 00:32:06.489 }' 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:06.489 12:43:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:06.747 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:06.747 ... 
00:32:06.747 fio-3.35 00:32:06.747 Starting 3 threads 00:32:13.300 00:32:13.300 filename0: (groupid=0, jobs=1): err= 0: pid=789323: Wed Oct 30 12:43:44 2024 00:32:13.300 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(142MiB/5003msec) 00:32:13.300 slat (nsec): min=7309, max=39160, avg=13266.44, stdev=2350.23 00:32:13.300 clat (usec): min=3784, max=51323, avg=13171.09, stdev=3342.61 00:32:13.300 lat (usec): min=3796, max=51362, avg=13184.35, stdev=3343.09 00:32:13.300 clat percentiles (usec): 00:32:13.300 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11469], 00:32:13.300 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:32:13.300 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15533], 95.00th=[16188], 00:32:13.300 | 99.00th=[17695], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:32:13.300 | 99.99th=[51119] 00:32:13.300 bw ( KiB/s): min=28160, max=30976, per=32.68%, avg=29087.20, stdev=788.15, samples=10 00:32:13.300 iops : min= 220, max= 242, avg=227.20, stdev= 6.20, samples=10 00:32:13.300 lat (msec) : 4=0.09%, 10=5.45%, 20=93.94%, 50=0.26%, 100=0.26% 00:32:13.300 cpu : usr=92.74%, sys=6.72%, ctx=10, majf=0, minf=154 00:32:13.300 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.300 issued rwts: total=1138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.300 filename0: (groupid=0, jobs=1): err= 0: pid=789324: Wed Oct 30 12:43:44 2024 00:32:13.300 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5044msec) 00:32:13.300 slat (nsec): min=7688, max=61948, avg=13281.79, stdev=2867.74 00:32:13.300 clat (usec): min=4516, max=48792, avg=13070.44, stdev=3134.63 00:32:13.300 lat (usec): min=4529, max=48804, avg=13083.72, stdev=3134.67 00:32:13.300 clat percentiles (usec): 00:32:13.300 | 1.00th=[ 5342], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11207], 00:32:13.300 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:32:13.300 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15664], 95.00th=[16057], 00:32:13.300 | 99.00th=[17171], 99.50th=[18220], 99.90th=[48497], 99.95th=[49021], 00:32:13.300 | 99.99th=[49021] 00:32:13.300 bw ( KiB/s): min=28160, max=32256, per=33.11%, avg=29465.60, stdev=1285.39, samples=10 00:32:13.300 iops : min= 220, max= 252, avg=230.20, stdev=10.04, samples=10 00:32:13.300 lat (msec) : 10=6.94%, 20=92.63%, 50=0.43% 00:32:13.300 cpu : usr=92.35%, sys=7.12%, ctx=15, majf=0, minf=93 00:32:13.300 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.300 issued rwts: total=1153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.300 filename0: (groupid=0, jobs=1): err= 0: pid=789325: Wed Oct 30 12:43:44 2024 00:32:13.300 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5004msec) 00:32:13.300 slat (nsec): min=7336, max=61436, avg=13006.40, stdev=2930.34 00:32:13.300 clat (usec): min=5618, max=51634, avg=12326.54, stdev=4034.71 00:32:13.300 lat (usec): min=5625, max=51647, avg=12339.55, stdev=4034.55 00:32:13.300 clat percentiles (usec): 00:32:13.300 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10159], 
20.00th=[10683], 00:32:13.300 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:32:13.300 | 70.00th=[12649], 80.00th=[13173], 90.00th=[14091], 95.00th=[14877], 00:32:13.300 | 99.00th=[16909], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:32:13.300 | 99.99th=[51643] 00:32:13.300 bw ( KiB/s): min=26880, max=32256, per=34.92%, avg=31078.40, stdev=1698.97, samples=10 00:32:13.300 iops : min= 210, max= 252, avg=242.80, stdev=13.27, samples=10 00:32:13.300 lat (msec) : 10=7.15%, 20=91.86%, 50=0.41%, 100=0.58% 00:32:13.300 cpu : usr=92.30%, sys=7.16%, ctx=6, majf=0, minf=125 00:32:13.300 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.300 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:13.300 00:32:13.300 Run status group 0 (all jobs): 00:32:13.300 READ: bw=86.9MiB/s (91.1MB/s), 28.4MiB/s-30.4MiB/s (29.8MB/s-31.9MB/s), io=438MiB (460MB), run=5003-5044msec 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 bdev_null0 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 [2024-10-30 12:43:45.037556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 bdev_null1 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:13.300 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 bdev_null2 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.301 { 00:32:13.301 "params": { 00:32:13.301 "name": 
"Nvme$subsystem", 00:32:13.301 "trtype": "$TEST_TRANSPORT", 00:32:13.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.301 "adrfam": "ipv4", 00:32:13.301 "trsvcid": "$NVMF_PORT", 00:32:13.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.301 "hdgst": ${hdgst:-false}, 00:32:13.301 "ddgst": ${ddgst:-false} 00:32:13.301 }, 00:32:13.301 "method": "bdev_nvme_attach_controller" 00:32:13.301 } 00:32:13.301 EOF 00:32:13.301 )") 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.301 { 00:32:13.301 "params": { 00:32:13.301 "name": "Nvme$subsystem", 00:32:13.301 "trtype": "$TEST_TRANSPORT", 00:32:13.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.301 "adrfam": "ipv4", 00:32:13.301 "trsvcid": "$NVMF_PORT", 00:32:13.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.301 "hdgst": ${hdgst:-false}, 00:32:13.301 "ddgst": ${ddgst:-false} 00:32:13.301 }, 00:32:13.301 "method": "bdev_nvme_attach_controller" 00:32:13.301 } 00:32:13.301 EOF 00:32:13.301 )") 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.301 { 00:32:13.301 "params": { 00:32:13.301 "name": "Nvme$subsystem", 00:32:13.301 "trtype": "$TEST_TRANSPORT", 00:32:13.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.301 "adrfam": "ipv4", 00:32:13.301 "trsvcid": "$NVMF_PORT", 00:32:13.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.301 "hdgst": ${hdgst:-false}, 00:32:13.301 "ddgst": ${ddgst:-false} 00:32:13.301 }, 00:32:13.301 "method": "bdev_nvme_attach_controller" 00:32:13.301 } 00:32:13.301 EOF 00:32:13.301 )") 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.301 "params": { 00:32:13.301 "name": "Nvme0", 00:32:13.301 "trtype": "tcp", 00:32:13.301 "traddr": "10.0.0.2", 00:32:13.301 "adrfam": "ipv4", 00:32:13.301 "trsvcid": "4420", 00:32:13.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.301 "hdgst": false, 00:32:13.301 "ddgst": false 00:32:13.301 }, 00:32:13.301 "method": "bdev_nvme_attach_controller" 00:32:13.301 },{ 00:32:13.301 "params": { 00:32:13.301 "name": "Nvme1", 00:32:13.301 "trtype": "tcp", 00:32:13.301 "traddr": "10.0.0.2", 00:32:13.301 "adrfam": "ipv4", 00:32:13.301 "trsvcid": "4420", 00:32:13.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.301 "hdgst": false, 00:32:13.301 "ddgst": false 00:32:13.301 }, 00:32:13.301 "method": "bdev_nvme_attach_controller" 00:32:13.301 },{ 00:32:13.301 "params": { 00:32:13.301 "name": "Nvme2", 00:32:13.301 "trtype": "tcp", 00:32:13.301 "traddr": "10.0.0.2", 00:32:13.301 "adrfam": "ipv4", 00:32:13.301 "trsvcid": "4420", 00:32:13.301 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:13.301 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:13.301 "hdgst": false, 00:32:13.301 "ddgst": false 00:32:13.301 }, 00:32:13.301 "method": "bdev_nvme_attach_controller" 00:32:13.301 }' 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:13.301 12:43:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:13.301 12:43:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.301 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:13.301 ... 00:32:13.301 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:13.301 ... 00:32:13.301 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:13.301 ... 00:32:13.301 fio-3.35 00:32:13.301 Starting 24 threads 00:32:25.496 00:32:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=790188: Wed Oct 30 12:43:56 2024 00:32:25.496 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10001msec) 00:32:25.496 slat (usec): min=8, max=142, avg=24.25, stdev=14.99 00:32:25.496 clat (usec): min=7529, max=43662, avg=33350.88, stdev=2561.64 00:32:25.496 lat (usec): min=7540, max=43686, avg=33375.13, stdev=2562.16 00:32:25.496 clat percentiles (usec): 00:32:25.496 | 1.00th=[15926], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:25.496 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:25.496 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.496 | 99.00th=[39060], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.496 | 99.99th=[43779] 00:32:25.496 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1906.53, stdev=58.73, samples=19 00:32:25.496 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:32:25.496 lat (msec) : 10=0.44%, 20=0.61%, 50=98.95% 00:32:25.496 cpu : usr=97.15%, sys=1.93%, ctx=166, majf=0, minf=27 00:32:25.496 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.496 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.496 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.496 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=790189: Wed Oct 30 12:43:56 2024 00:32:25.496 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.496 slat (usec): min=10, max=130, avg=42.43, stdev=17.06 00:32:25.496 clat (usec): min=14794, max=56287, avg=33447.65, stdev=2023.71 00:32:25.496 lat (usec): min=14821, max=56316, avg=33490.08, stdev=2022.65 00:32:25.496 clat percentiles (usec): 00:32:25.496 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.496 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.496 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.496 | 99.00th=[41157], 99.50th=[43254], 99.90th=[56361], 99.95th=[56361], 00:32:25.496 | 99.99th=[56361] 00:32:25.496 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1886.47, stdev=71.42, samples=19 00:32:25.496 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:32:25.496 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:25.496 cpu : usr=98.21%, sys=1.36%, ctx=24, majf=0, minf=27 00:32:25.496 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:32:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.496 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.496 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.496 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=790190: Wed Oct 30 12:43:56 2024 00:32:25.496 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10012msec) 00:32:25.496 slat (usec): min=6, max=112, avg=32.15, stdev=16.71 00:32:25.496 clat (usec): min=14078, max=77565, avg=31873.24, stdev=4857.11 00:32:25.496 lat (usec): min=14114, max=77586, avg=31905.39, stdev=4859.80 00:32:25.496 clat percentiles (usec): 00:32:25.496 | 1.00th=[20055], 5.00th=[21627], 10.00th=[23725], 20.00th=[32637], 00:32:25.496 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.496 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:32:25.496 | 99.00th=[43779], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:32:25.496 | 99.99th=[77071] 00:32:25.496 bw ( KiB/s): min= 1667, max= 2448, per=4.32%, avg=1969.84, stdev=185.01, samples=19 00:32:25.496 iops : min= 416, max= 612, avg=492.42, stdev=46.32, samples=19 00:32:25.496 lat (msec) : 20=1.00%, 50=98.32%, 100=0.68% 00:32:25.496 cpu : usr=98.10%, sys=1.47%, ctx=14, majf=0, minf=21 00:32:25.496 IO depths : 1=2.7%, 2=7.4%, 4=20.0%, 8=59.7%, 16=10.1%, 32=0.0%, >=64=0.0% 00:32:25.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.496 complete : 0=0.0%, 4=92.9%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.496 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.496 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.496 filename0: (groupid=0, jobs=1): err= 0: pid=790191: Wed Oct 30 12:43:56 2024 00:32:25.496 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10008msec) 00:32:25.496 slat (usec): min=4, max=111, avg=40.58, stdev=18.73 00:32:25.496 clat (usec): min=18941, max=63354, avg=33444.22, stdev=1629.22 00:32:25.497 lat (usec): min=18967, max=63366, avg=33484.79, stdev=1627.79 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.497 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[40633], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:32:25.497 | 99.99th=[63177] 00:32:25.497 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:32:25.497 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:32:25.497 lat (msec) : 20=0.34%, 50=99.62%, 100=0.04% 00:32:25.497 cpu : usr=98.04%, sys=1.51%, ctx=16, majf=0, minf=18 00:32:25.497 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename0: (groupid=0, jobs=1): err= 0: pid=790192: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.497 slat (nsec): min=10907, max=93658, avg=36314.75, stdev=12342.80 00:32:25.497 clat (usec): min=27126, 
max=43593, avg=33470.14, stdev=1090.77 00:32:25.497 lat (usec): min=27140, max=43658, avg=33506.45, stdev=1091.71 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.497 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[39060], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:32:25.497 | 99.99th=[43779] 00:32:25.497 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:32:25.497 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:32:25.497 lat (msec) : 50=100.00% 00:32:25.497 cpu : usr=97.21%, sys=1.70%, ctx=132, majf=0, minf=24 00:32:25.497 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename0: (groupid=0, jobs=1): err= 0: pid=790193: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10010msec) 00:32:25.497 slat (usec): min=9, max=107, avg=39.45, stdev=18.70 00:32:25.497 clat (usec): min=17675, max=89744, avg=33548.72, stdev=3234.00 00:32:25.497 lat (usec): min=17727, max=89769, avg=33588.17, stdev=3232.84 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.497 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[42206], 99.50th=[43254], 99.90th=[82314], 99.95th=[82314], 00:32:25.497 | 99.99th=[89654] 00:32:25.497 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1879.58, stdev=74.55, samples=19 00:32:25.497 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:25.497 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:25.497 cpu : usr=95.83%, sys=2.48%, ctx=566, majf=0, minf=17 00:32:25.497 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename0: (groupid=0, jobs=1): err= 0: pid=790194: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.497 slat (nsec): min=8422, max=87383, avg=33387.12, stdev=11479.29 00:32:25.497 clat (usec): min=20710, max=45064, avg=33534.64, stdev=1431.76 00:32:25.497 lat (usec): min=20721, max=45080, avg=33568.03, stdev=1432.34 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[27657], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.497 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[40633], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.497 | 99.99th=[44827] 00:32:25.497 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=54.36, samples=19 
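A quick key for these per-file blocks: slat and clat are submission and completion latency in the units shown, the bracketed clat percentiles give the latency distribution (here tightly clustered around 33-34 ms), per= is this file's share of the group's aggregate bandwidth, and the bw and iops lines are two views of the same rate at the configured 4 KiB block size. A sanity check with the numbers just above, as plain bash arithmetic (illustrative only):

    # 1886 KiB/s at 4 KiB per IO should equal the reported average IOPS
    echo $((1886 / 4))   # prints 471, matching the avg on the iops line that follows

The per=4.14% figure checks out the same way against the 44.5 MiB/s group total reported in the run status at the end of this job set.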
00:32:25.497 iops : min= 448, max= 480, avg=471.58, stdev=13.59, samples=19 00:32:25.497 lat (msec) : 50=100.00% 00:32:25.497 cpu : usr=98.04%, sys=1.55%, ctx=13, majf=0, minf=39 00:32:25.497 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename0: (groupid=0, jobs=1): err= 0: pid=790195: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:32:25.497 slat (usec): min=7, max=129, avg=20.76, stdev=20.28 00:32:25.497 clat (usec): min=14045, max=43680, avg=33496.17, stdev=1846.73 00:32:25.497 lat (usec): min=14063, max=43697, avg=33516.93, stdev=1844.43 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[22676], 5.00th=[32637], 10.00th=[33162], 20.00th=[33162], 00:32:25.497 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:25.497 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:32:25.497 | 99.99th=[43779] 00:32:25.497 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1899.79, stdev=47.95, samples=19 00:32:25.497 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:32:25.497 lat (msec) : 20=0.67%, 50=99.33% 00:32:25.497 cpu : usr=98.25%, sys=1.35%, ctx=10, majf=0, minf=42 00:32:25.497 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename1: (groupid=0, jobs=1): err= 0: pid=790196: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:25.497 slat (nsec): min=6371, max=99705, avg=39434.16, stdev=16819.87 00:32:25.497 clat (usec): min=14783, max=69377, avg=33478.42, stdev=2571.36 00:32:25.497 lat (usec): min=14817, max=69407, avg=33517.86, stdev=2570.88 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[22152], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.497 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[43779], 99.50th=[45876], 99.90th=[56886], 99.95th=[56886], 00:32:25.497 | 99.99th=[69731] 00:32:25.497 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1885.47, stdev=71.61, samples=19 00:32:25.497 iops : min= 416, max= 480, avg=471.37, stdev=17.90, samples=19 00:32:25.497 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:25.497 cpu : usr=97.63%, sys=1.77%, ctx=53, majf=0, minf=20 00:32:25.497 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 
filename1: (groupid=0, jobs=1): err= 0: pid=790197: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.497 slat (nsec): min=8683, max=90444, avg=35589.44, stdev=11403.58 00:32:25.497 clat (usec): min=27127, max=43644, avg=33489.32, stdev=1089.44 00:32:25.497 lat (usec): min=27137, max=43665, avg=33524.91, stdev=1089.73 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.497 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[39060], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.497 | 99.99th=[43779] 00:32:25.497 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:32:25.497 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:32:25.497 lat (msec) : 50=100.00% 00:32:25.497 cpu : usr=97.89%, sys=1.44%, ctx=95, majf=0, minf=32 00:32:25.497 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename1: (groupid=0, jobs=1): err= 0: pid=790198: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:25.497 slat (nsec): min=6680, max=75166, avg=32676.32, stdev=11729.98 00:32:25.497 clat (usec): min=14786, max=57012, avg=33524.11, stdev=2054.87 00:32:25.497 lat (usec): min=14836, max=57029, avg=33556.78, stdev=2053.14 00:32:25.497 clat percentiles (usec): 00:32:25.497 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.497 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.497 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.497 | 99.00th=[41157], 99.50th=[43254], 99.90th=[56886], 99.95th=[56886], 00:32:25.497 | 99.99th=[56886] 00:32:25.497 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:32:25.497 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:32:25.497 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:25.497 cpu : usr=97.03%, sys=1.95%, ctx=230, majf=0, minf=17 00:32:25.497 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.497 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.497 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.497 filename1: (groupid=0, jobs=1): err= 0: pid=790199: Wed Oct 30 12:43:56 2024 00:32:25.497 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10001msec) 00:32:25.497 slat (usec): min=7, max=126, avg=36.33, stdev=18.65 00:32:25.497 clat (usec): min=6847, max=43667, avg=33256.57, stdev=2574.52 00:32:25.498 lat (usec): min=6879, max=43686, avg=33292.90, stdev=2575.09 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[15926], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:25.498 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 
80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[39060], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.498 | 99.99th=[43779] 00:32:25.498 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1906.53, stdev=58.73, samples=19 00:32:25.498 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:32:25.498 lat (msec) : 10=0.34%, 20=0.67%, 50=98.99% 00:32:25.498 cpu : usr=96.74%, sys=2.17%, ctx=140, majf=0, minf=27 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename1: (groupid=0, jobs=1): err= 0: pid=790200: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.498 slat (usec): min=8, max=106, avg=35.34, stdev=11.99 00:32:25.498 clat (usec): min=14681, max=55913, avg=33492.49, stdev=2477.61 00:32:25.498 lat (usec): min=14711, max=55928, avg=33527.83, stdev=2476.62 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[22152], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.498 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[44827], 99.50th=[45876], 99.90th=[55837], 99.95th=[55837], 00:32:25.498 | 99.99th=[55837] 00:32:25.498 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:32:25.498 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:32:25.498 lat (msec) : 20=0.38%, 50=99.28%, 100=0.34% 00:32:25.498 cpu : usr=97.82%, sys=1.52%, ctx=81, majf=0, minf=24 00:32:25.498 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename1: (groupid=0, jobs=1): err= 0: pid=790201: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.498 slat (nsec): min=9132, max=89514, avg=35537.20, stdev=12172.85 00:32:25.498 clat (usec): min=25294, max=44260, avg=33495.36, stdev=1113.14 00:32:25.498 lat (usec): min=25350, max=44275, avg=33530.90, stdev=1113.70 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.498 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[39060], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.498 | 99.99th=[44303] 00:32:25.498 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:32:25.498 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:32:25.498 lat (msec) : 50=100.00% 00:32:25.498 cpu : usr=98.38%, sys=1.21%, ctx=13, majf=0, minf=24 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename1: (groupid=0, jobs=1): err= 0: pid=790202: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:32:25.498 slat (usec): min=6, max=172, avg=29.63, stdev=18.18 00:32:25.498 clat (usec): min=21384, max=43905, avg=33458.88, stdev=1499.89 00:32:25.498 lat (usec): min=21424, max=43924, avg=33488.51, stdev=1500.00 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[23462], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:25.498 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:32:25.498 | 99.99th=[43779] 00:32:25.498 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:32:25.498 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:32:25.498 lat (msec) : 50=100.00% 00:32:25.498 cpu : usr=97.82%, sys=1.53%, ctx=78, majf=0, minf=22 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename1: (groupid=0, jobs=1): err= 0: pid=790203: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.498 slat (usec): min=10, max=114, avg=44.89, stdev=18.40 00:32:25.498 clat (usec): min=14844, max=69160, avg=33430.35, stdev=2157.43 00:32:25.498 lat (usec): min=14879, max=69190, avg=33475.24, stdev=2156.35 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.498 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.498 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[41157], 99.50th=[43779], 99.90th=[56886], 99.95th=[56886], 00:32:25.498 | 99.99th=[68682] 00:32:25.498 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1886.47, stdev=71.42, samples=19 00:32:25.498 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:32:25.498 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:25.498 cpu : usr=98.18%, sys=1.41%, ctx=14, majf=0, minf=21 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename2: (groupid=0, jobs=1): err= 0: pid=790204: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:32:25.498 slat (nsec): min=5493, max=90086, avg=32726.55, stdev=13820.25 00:32:25.498 clat (usec): min=18820, max=43898, avg=33450.68, stdev=1471.08 00:32:25.498 lat (usec): min=18854, max=43920, 
avg=33483.41, stdev=1471.81 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[30016], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:25.498 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[40109], 99.50th=[40633], 99.90th=[43779], 99.95th=[43779], 00:32:25.498 | 99.99th=[43779] 00:32:25.498 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:32:25.498 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:32:25.498 lat (msec) : 20=0.38%, 50=99.62% 00:32:25.498 cpu : usr=98.09%, sys=1.51%, ctx=14, majf=0, minf=24 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename2: (groupid=0, jobs=1): err= 0: pid=790205: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10001msec) 00:32:25.498 slat (nsec): min=5713, max=91395, avg=29750.47, stdev=13882.60 00:32:25.498 clat (usec): min=7675, max=43689, avg=33322.10, stdev=2582.62 00:32:25.498 lat (usec): min=7683, max=43707, avg=33351.85, stdev=2582.39 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[15795], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:25.498 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[38536], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.498 | 99.99th=[43779] 00:32:25.498 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1906.53, stdev=58.73, samples=19 00:32:25.498 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:32:25.498 lat (msec) : 10=0.34%, 20=0.69%, 50=98.97% 00:32:25.498 cpu : usr=97.34%, sys=1.72%, ctx=123, majf=0, minf=44 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename2: (groupid=0, jobs=1): err= 0: pid=790206: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10008msec) 00:32:25.498 slat (nsec): min=4132, max=92835, avg=33259.74, stdev=15357.83 00:32:25.498 clat (usec): min=18961, max=45109, avg=33495.22, stdev=1465.09 00:32:25.498 lat (usec): min=18969, max=45121, avg=33528.48, stdev=1465.53 00:32:25.498 clat percentiles (usec): 00:32:25.498 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:25.498 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.498 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.498 | 99.00th=[40633], 99.50th=[43779], 99.90th=[44827], 99.95th=[45351], 00:32:25.498 | 99.99th=[45351] 00:32:25.498 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.47, stdev=57.64, samples=19 00:32:25.498 iops : min= 448, max= 480, avg=471.58, stdev=14.48, 
samples=19 00:32:25.498 lat (msec) : 20=0.34%, 50=99.66% 00:32:25.498 cpu : usr=97.09%, sys=1.94%, ctx=138, majf=0, minf=20 00:32:25.498 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.498 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.498 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.498 filename2: (groupid=0, jobs=1): err= 0: pid=790207: Wed Oct 30 12:43:56 2024 00:32:25.498 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.498 slat (nsec): min=10754, max=90443, avg=35546.30, stdev=11143.05 00:32:25.498 clat (usec): min=27119, max=43635, avg=33496.84, stdev=1090.82 00:32:25.498 lat (usec): min=27158, max=43657, avg=33532.38, stdev=1090.86 00:32:25.499 clat percentiles (usec): 00:32:25.499 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.499 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.499 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.499 | 99.00th=[38536], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:32:25.499 | 99.99th=[43779] 00:32:25.499 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1886.32, stdev=57.91, samples=19 00:32:25.499 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:32:25.499 lat (msec) : 50=100.00% 00:32:25.499 cpu : usr=98.07%, sys=1.52%, ctx=16, majf=0, minf=28 00:32:25.499 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.499 filename2: (groupid=0, jobs=1): err= 0: pid=790208: Wed Oct 30 12:43:56 2024 00:32:25.499 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:25.499 slat (usec): min=8, max=113, avg=43.41, stdev=18.91 00:32:25.499 clat (usec): min=14779, max=69712, avg=33449.88, stdev=2146.59 00:32:25.499 lat (usec): min=14826, max=69751, avg=33493.28, stdev=2145.67 00:32:25.499 clat percentiles (usec): 00:32:25.499 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.499 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.499 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.499 | 99.00th=[41157], 99.50th=[43254], 99.90th=[57410], 99.95th=[57410], 00:32:25.499 | 99.99th=[69731] 00:32:25.499 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:32:25.499 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:32:25.499 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:25.499 cpu : usr=97.97%, sys=1.41%, ctx=42, majf=0, minf=24 00:32:25.499 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:25.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.499 filename2: (groupid=0, jobs=1): err= 0: pid=790209: Wed Oct 30 
12:43:56 2024 00:32:25.499 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:32:25.499 slat (usec): min=6, max=158, avg=38.32, stdev=18.04 00:32:25.499 clat (usec): min=17046, max=47127, avg=33380.51, stdev=1617.99 00:32:25.499 lat (usec): min=17091, max=47161, avg=33418.82, stdev=1617.85 00:32:25.499 clat percentiles (usec): 00:32:25.499 | 1.00th=[24249], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:32:25.499 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:25.499 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.499 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[46924], 00:32:25.499 | 99.99th=[46924] 00:32:25.499 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1894.40, stdev=52.53, samples=20 00:32:25.499 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:32:25.499 lat (msec) : 20=0.08%, 50=99.92% 00:32:25.499 cpu : usr=97.89%, sys=1.40%, ctx=79, majf=0, minf=30 00:32:25.499 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:25.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.499 filename2: (groupid=0, jobs=1): err= 0: pid=790210: Wed Oct 30 12:43:56 2024 00:32:25.499 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:32:25.499 slat (usec): min=8, max=123, avg=38.86, stdev=13.77 00:32:25.499 clat (usec): min=14798, max=56049, avg=33462.59, stdev=2007.82 00:32:25.499 lat (usec): min=14844, max=56067, avg=33501.45, stdev=2007.72 00:32:25.499 clat percentiles (usec): 00:32:25.499 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:32:25.499 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:32:25.499 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.499 | 99.00th=[41157], 99.50th=[43254], 99.90th=[55837], 99.95th=[55837], 00:32:25.499 | 99.99th=[55837] 00:32:25.499 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:32:25.499 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:32:25.499 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:25.499 cpu : usr=98.31%, sys=1.18%, ctx=53, majf=0, minf=30 00:32:25.499 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.499 filename2: (groupid=0, jobs=1): err= 0: pid=790211: Wed Oct 30 12:43:56 2024 00:32:25.499 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:32:25.499 slat (nsec): min=7757, max=74268, avg=28505.07, stdev=12507.10 00:32:25.499 clat (usec): min=14059, max=43723, avg=33451.09, stdev=1826.04 00:32:25.499 lat (usec): min=14077, max=43752, avg=33479.60, stdev=1826.14 00:32:25.499 clat percentiles (usec): 00:32:25.499 | 1.00th=[22676], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:25.499 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:25.499 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:25.499 | 
99.00th=[40633], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:32:25.499 | 99.99th=[43779] 00:32:25.499 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1899.79, stdev=47.95, samples=19 00:32:25.499 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:32:25.499 lat (msec) : 20=0.67%, 50=99.33% 00:32:25.499 cpu : usr=97.51%, sys=1.66%, ctx=84, majf=0, minf=27 00:32:25.499 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:25.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.499 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:25.499 00:32:25.499 Run status group 0 (all jobs): 00:32:25.499 READ: bw=44.5MiB/s (46.6MB/s), 1886KiB/s-1992KiB/s (1931kB/s-2040kB/s), io=446MiB (467MB), run=10001-10018msec 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.499 bdev_null0 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.499 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.500 12:43:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 [2024-10-30 12:43:56.978348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 bdev_null1 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.500 { 00:32:25.500 "params": { 00:32:25.500 "name": "Nvme$subsystem", 00:32:25.500 "trtype": "$TEST_TRANSPORT", 00:32:25.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.500 "adrfam": "ipv4", 00:32:25.500 "trsvcid": "$NVMF_PORT", 00:32:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:32:25.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.500 "hdgst": ${hdgst:-false}, 00:32:25.500 "ddgst": ${ddgst:-false} 00:32:25.500 }, 00:32:25.500 "method": "bdev_nvme_attach_controller" 00:32:25.500 } 00:32:25.500 EOF 00:32:25.500 )") 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.500 { 00:32:25.500 "params": { 00:32:25.500 "name": "Nvme$subsystem", 00:32:25.500 "trtype": "$TEST_TRANSPORT", 00:32:25.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.500 "adrfam": "ipv4", 00:32:25.500 "trsvcid": "$NVMF_PORT", 00:32:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.500 "hdgst": ${hdgst:-false}, 00:32:25.500 "ddgst": ${ddgst:-false} 00:32:25.500 }, 00:32:25.500 "method": "bdev_nvme_attach_controller" 00:32:25.500 } 00:32:25.500 EOF 00:32:25.500 )") 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:25.500 "params": { 00:32:25.500 "name": "Nvme0", 00:32:25.500 "trtype": "tcp", 00:32:25.500 "traddr": "10.0.0.2", 00:32:25.500 "adrfam": "ipv4", 00:32:25.500 "trsvcid": "4420", 00:32:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.500 "hdgst": false, 00:32:25.500 "ddgst": false 00:32:25.500 }, 00:32:25.500 "method": "bdev_nvme_attach_controller" 00:32:25.500 },{ 00:32:25.500 "params": { 00:32:25.500 "name": "Nvme1", 00:32:25.500 "trtype": "tcp", 00:32:25.500 "traddr": "10.0.0.2", 00:32:25.500 "adrfam": "ipv4", 00:32:25.500 "trsvcid": "4420", 00:32:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.500 "hdgst": false, 00:32:25.500 "ddgst": false 00:32:25.500 }, 00:32:25.500 "method": "bdev_nvme_attach_controller" 00:32:25.500 }' 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:25.500 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.500 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:25.500 ... 00:32:25.500 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:25.500 ... 
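The fio command line assembled above can be reproduced outside the harness. A minimal sketch, assuming the plugin was built at build/fio/spdk_bdev, fio is the same source build the plugin was compiled against, and the JSON printed by gen_nvmf_target_json has been saved to conf.json; the job parameters mirror this NULL_DIF=1 pass (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), and Nvme0n1 is the bdev name that bdev_nvme_attach_controller produces for namespace 1 of Nvme0:

    # hypothetical standalone equivalent of the fio_bdev wrapper traced above;
    # the spdk_bdev ioengine requires --thread=1
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --name=filename0 --filename=Nvme0n1 \
        --ioengine=spdk_bdev --spdk_json_conf=conf.json --thread=1 \
        --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 \
        --time_based=1 --runtime=5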
00:32:25.500 fio-3.35 00:32:25.500 Starting 4 threads 00:32:30.758 00:32:30.758 filename0: (groupid=0, jobs=1): err= 0: pid=791589: Wed Oct 30 12:44:03 2024 00:32:30.758 read: IOPS=1869, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5001msec) 00:32:30.758 slat (nsec): min=4019, max=44444, avg=14246.24, stdev=4517.76 00:32:30.758 clat (usec): min=792, max=7493, avg=4226.92, stdev=664.65 00:32:30.758 lat (usec): min=805, max=7507, avg=4241.17, stdev=663.88 00:32:30.758 clat percentiles (usec): 00:32:30.758 | 1.00th=[ 2704], 5.00th=[ 3621], 10.00th=[ 3884], 20.00th=[ 3982], 00:32:30.758 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:32:30.758 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4883], 95.00th=[ 5604], 00:32:30.758 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7439], 99.95th=[ 7439], 00:32:30.758 | 99.99th=[ 7504] 00:32:30.758 bw ( KiB/s): min=13632, max=15488, per=24.00%, avg=14887.11, stdev=561.43, samples=9 00:32:30.758 iops : min= 1704, max= 1936, avg=1860.89, stdev=70.18, samples=9 00:32:30.758 lat (usec) : 1000=0.12% 00:32:30.758 lat (msec) : 2=0.43%, 4=22.19%, 10=77.26% 00:32:30.758 cpu : usr=93.60%, sys=5.46%, ctx=37, majf=0, minf=75 00:32:30.758 IO depths : 1=0.3%, 2=14.0%, 4=58.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.758 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.758 issued rwts: total=9350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.758 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:30.758 filename0: (groupid=0, jobs=1): err= 0: pid=791590: Wed Oct 30 12:44:03 2024 00:32:30.758 read: IOPS=1991, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5002msec) 00:32:30.758 slat (nsec): min=3882, max=37604, avg=11970.70, stdev=4199.16 00:32:30.758 clat (usec): min=1220, max=7293, avg=3976.14, stdev=483.15 00:32:30.758 lat (usec): min=1234, max=7307, avg=3988.11, stdev=483.43 00:32:30.758 clat percentiles (usec): 00:32:30.758 | 1.00th=[ 2671], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3687], 00:32:30.758 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:32:30.758 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4490], 00:32:30.758 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 7177], 99.95th=[ 7177], 00:32:30.758 | 99.99th=[ 7308] 00:32:30.758 bw ( KiB/s): min=15232, max=16704, per=25.68%, avg=15932.80, stdev=537.63, samples=10 00:32:30.758 iops : min= 1904, max= 2088, avg=1991.60, stdev=67.20, samples=10 00:32:30.758 lat (msec) : 2=0.15%, 4=36.04%, 10=63.81% 00:32:30.758 cpu : usr=94.12%, sys=5.22%, ctx=48, majf=0, minf=96 00:32:30.758 IO depths : 1=0.2%, 2=12.1%, 4=59.9%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.758 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.758 issued rwts: total=9962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.758 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:30.759 filename1: (groupid=0, jobs=1): err= 0: pid=791591: Wed Oct 30 12:44:03 2024 00:32:30.759 read: IOPS=1956, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5004msec) 00:32:30.759 slat (nsec): min=3779, max=44676, avg=13896.26, stdev=4222.90 00:32:30.759 clat (usec): min=791, max=7329, avg=4038.57, stdev=509.89 00:32:30.759 lat (usec): min=806, max=7349, avg=4052.47, stdev=509.95 00:32:30.759 clat percentiles (usec): 00:32:30.759 | 1.00th=[ 2704], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 
3818], 00:32:30.759 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:32:30.759 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 4883], 00:32:30.759 | 99.00th=[ 5866], 99.50th=[ 6390], 99.90th=[ 7111], 99.95th=[ 7177], 00:32:30.759 | 99.99th=[ 7308] 00:32:30.759 bw ( KiB/s): min=14752, max=17136, per=25.22%, avg=15649.60, stdev=688.04, samples=10 00:32:30.759 iops : min= 1844, max= 2142, avg=1956.20, stdev=86.00, samples=10 00:32:30.759 lat (usec) : 1000=0.03% 00:32:30.759 lat (msec) : 2=0.23%, 4=33.61%, 10=66.13% 00:32:30.759 cpu : usr=93.82%, sys=5.32%, ctx=68, majf=0, minf=39 00:32:30.759 IO depths : 1=0.2%, 2=16.4%, 4=56.4%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.759 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.759 issued rwts: total=9789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.759 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:30.759 filename1: (groupid=0, jobs=1): err= 0: pid=791592: Wed Oct 30 12:44:03 2024 00:32:30.759 read: IOPS=1940, BW=15.2MiB/s (15.9MB/s)(75.8MiB/5001msec) 00:32:30.759 slat (nsec): min=3739, max=42827, avg=14426.75, stdev=4592.78 00:32:30.759 clat (usec): min=786, max=10137, avg=4067.00, stdev=537.92 00:32:30.759 lat (usec): min=800, max=10148, avg=4081.43, stdev=537.83 00:32:30.759 clat percentiles (usec): 00:32:30.759 | 1.00th=[ 2671], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3884], 00:32:30.759 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:32:30.759 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4490], 95.00th=[ 4948], 00:32:30.759 | 99.00th=[ 5997], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7439], 00:32:30.759 | 99.99th=[10159] 00:32:30.759 bw ( KiB/s): min=15088, max=16256, per=25.01%, avg=15519.80, stdev=343.26, samples=10 00:32:30.759 iops : min= 1886, max= 2032, avg=1939.90, stdev=42.97, samples=10 00:32:30.759 lat (usec) : 1000=0.05% 00:32:30.759 lat (msec) : 2=0.58%, 4=32.77%, 10=66.59%, 20=0.01% 00:32:30.759 cpu : usr=91.96%, sys=6.24%, ctx=214, majf=0, minf=103 00:32:30.759 IO depths : 1=0.4%, 2=18.0%, 4=55.7%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:30.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.759 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:30.759 issued rwts: total=9705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:30.759 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:30.759 00:32:30.759 Run status group 0 (all jobs): 00:32:30.759 READ: bw=60.6MiB/s (63.5MB/s), 14.6MiB/s-15.6MiB/s (15.3MB/s-16.3MB/s), io=303MiB (318MB), run=5001-5004msec 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 12:44:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.759 00:32:30.759 real 0m24.483s 00:32:30.759 user 4m32.002s 00:32:30.759 sys 0m6.964s 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 ************************************ 00:32:30.759 END TEST fio_dif_rand_params 00:32:30.759 ************************************ 00:32:30.759 12:44:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:30.759 12:44:03 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:30.759 12:44:03 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 ************************************ 00:32:30.759 START TEST fio_dif_digest 00:32:30.759 ************************************ 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:30.759 12:44:03 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:30.759 bdev_null0 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.759 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.018 [2024-10-30 12:44:03.460056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.018 { 00:32:31.018 "params": { 00:32:31.018 "name": "Nvme$subsystem", 00:32:31.018 "trtype": "$TEST_TRANSPORT", 00:32:31.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.018 "adrfam": "ipv4", 00:32:31.018 "trsvcid": "$NVMF_PORT", 00:32:31.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.018 "hdgst": ${hdgst:-false}, 00:32:31.018 "ddgst": 
${ddgst:-false} 00:32:31.018 }, 00:32:31.018 "method": "bdev_nvme_attach_controller" 00:32:31.018 } 00:32:31.018 EOF 00:32:31.018 )") 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
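Target-side, the create_subsystems call that opened this digest pass reduces to four RPCs per subsystem. A minimal sketch against the running nvmf_tgt, using scripts/rpc.py with the same arguments traced above (the TCP transport is assumed already created by the harness; NULL_DIF=3 selects DIF type 3 on the null bdev):

    # digest-test subsystem 0: 64 MiB null bdev, 512-byte blocks, 16-byte metadata
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420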
00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.018 "params": { 00:32:31.018 "name": "Nvme0", 00:32:31.018 "trtype": "tcp", 00:32:31.018 "traddr": "10.0.0.2", 00:32:31.018 "adrfam": "ipv4", 00:32:31.018 "trsvcid": "4420", 00:32:31.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.018 "hdgst": true, 00:32:31.018 "ddgst": true 00:32:31.018 }, 00:32:31.018 "method": "bdev_nvme_attach_controller" 00:32:31.018 }' 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:31.018 12:44:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.277 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:31.277 ... 
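The only host-side change for the digest pass is the final pair of booleans: hdgst and ddgst arrive as true instead of false, which enables the NVMe/TCP header and data digests (CRC32C) on the initiator connection. Reflowed for readability, with values verbatim from the printf output above, the attach parameters are:

    {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      }
    }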
00:32:31.277 fio-3.35 00:32:31.277 Starting 3 threads 00:32:43.469 00:32:43.469 filename0: (groupid=0, jobs=1): err= 0: pid=792342: Wed Oct 30 12:44:14 2024 00:32:43.469 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(271MiB/10047msec) 00:32:43.469 slat (nsec): min=4246, max=37606, avg=14858.61, stdev=2465.90 00:32:43.469 clat (usec): min=9497, max=53102, avg=13852.34, stdev=1493.40 00:32:43.469 lat (usec): min=9512, max=53117, avg=13867.19, stdev=1493.38 00:32:43.469 clat percentiles (usec): 00:32:43.469 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:32:43.469 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:32:43.469 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[15270], 00:32:43.469 | 99.00th=[16057], 99.50th=[16581], 99.90th=[22152], 99.95th=[49546], 00:32:43.469 | 99.99th=[53216] 00:32:43.469 bw ( KiB/s): min=26880, max=28416, per=34.56%, avg=27737.60, stdev=433.77, samples=20 00:32:43.469 iops : min= 210, max= 222, avg=216.70, stdev= 3.39, samples=20 00:32:43.469 lat (msec) : 10=0.18%, 20=99.59%, 50=0.18%, 100=0.05% 00:32:43.469 cpu : usr=90.77%, sys=7.45%, ctx=454, majf=0, minf=129 00:32:43.469 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:43.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.469 issued rwts: total=2170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.469 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:43.469 filename0: (groupid=0, jobs=1): err= 0: pid=792343: Wed Oct 30 12:44:14 2024 00:32:43.469 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10046msec) 00:32:43.469 slat (nsec): min=4294, max=29846, avg=15342.41, stdev=2264.53 00:32:43.469 clat (usec): min=11380, max=52180, avg=14393.69, stdev=2036.77 00:32:43.469 lat (usec): min=11397, max=52197, avg=14409.03, stdev=2036.77 00:32:43.469 clat percentiles (usec): 00:32:43.469 | 1.00th=[12125], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:32:43.469 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:32:43.469 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15795], 00:32:43.469 | 99.00th=[16712], 99.50th=[16909], 99.90th=[52167], 99.95th=[52167], 00:32:43.469 | 99.99th=[52167] 00:32:43.469 bw ( KiB/s): min=24576, max=27392, per=33.27%, avg=26700.80, stdev=599.51, samples=20 00:32:43.469 iops : min= 192, max= 214, avg=208.60, stdev= 4.68, samples=20 00:32:43.469 lat (msec) : 20=99.62%, 50=0.19%, 100=0.19% 00:32:43.469 cpu : usr=93.43%, sys=6.03%, ctx=19, majf=0, minf=125 00:32:43.469 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:43.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.469 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.469 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:43.469 filename0: (groupid=0, jobs=1): err= 0: pid=792344: Wed Oct 30 12:44:14 2024 00:32:43.469 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10045msec) 00:32:43.469 slat (nsec): min=4321, max=29520, avg=13697.83, stdev=1319.92 00:32:43.469 clat (usec): min=8568, max=53272, avg=14719.23, stdev=1525.86 00:32:43.469 lat (usec): min=8582, max=53285, avg=14732.93, stdev=1525.85 00:32:43.469 clat percentiles (usec): 00:32:43.469 | 1.00th=[12387], 5.00th=[13304], 10.00th=[13566], 20.00th=[13960], 
00:32:43.469 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:32:43.469 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16188], 00:32:43.469 | 99.00th=[16909], 99.50th=[17171], 99.90th=[23200], 99.95th=[49021], 00:32:43.469 | 99.99th=[53216] 00:32:43.469 bw ( KiB/s): min=25600, max=26624, per=32.53%, avg=26112.00, stdev=275.47, samples=20 00:32:43.469 iops : min= 200, max= 208, avg=204.00, stdev= 2.15, samples=20 00:32:43.469 lat (msec) : 10=0.54%, 20=99.22%, 50=0.20%, 100=0.05% 00:32:43.469 cpu : usr=93.94%, sys=5.58%, ctx=15, majf=0, minf=148 00:32:43.469 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:43.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.469 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.469 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:43.469 00:32:43.469 Run status group 0 (all jobs): 00:32:43.469 READ: bw=78.4MiB/s (82.2MB/s), 25.4MiB/s-27.0MiB/s (26.6MB/s-28.3MB/s), io=788MiB (826MB), run=10045-10047msec 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.469 00:32:43.469 real 0m11.066s 00:32:43.469 user 0m29.128s 00:32:43.469 sys 0m2.170s 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:43.469 12:44:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:43.469 ************************************ 00:32:43.469 END TEST fio_dif_digest 00:32:43.469 ************************************ 00:32:43.469 12:44:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:43.469 12:44:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:43.469 rmmod nvme_tcp 00:32:43.469 rmmod nvme_fabrics 00:32:43.469 rmmod nvme_keyring 00:32:43.469 12:44:14 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 786295 ']' 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 786295 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 786295 ']' 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 786295 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 786295 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 786295' 00:32:43.469 killing process with pid 786295 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@971 -- # kill 786295 00:32:43.469 12:44:14 nvmf_dif -- common/autotest_common.sh@976 -- # wait 786295 00:32:43.469 12:44:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:43.470 12:44:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:43.470 Waiting for block devices as requested 00:32:43.470 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:43.470 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:43.728 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:43.728 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:43.728 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:43.985 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:43.985 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:43.985 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:43.985 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:44.244 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:44.244 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:44.244 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:44.244 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:44.503 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:44.503 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:44.503 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:44.503 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:44.761 12:44:17 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.761 12:44:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:44.761 12:44:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.668 12:44:19 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:46.927 
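The teardown interleaved above runs in a fixed order. A rough sketch of what nvmftestfini amounts to in this run (killprocess and remove_spdk_ns are harness helpers, approximated here; the pid, namespace, and interface names are from this topology):

    sync
    # unload initiator modules; nvme_fabrics and nvme_keyring come out with nvme_tcp
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target app (pid 786295 here), then hand the NIC/NVMe devices back
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    ./scripts/setup.sh reset
    # drop only the SPDK-tagged iptables rules, then remove the test namespace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1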
00:32:46.927 real 1m7.112s 00:32:46.927 user 6m28.943s 00:32:46.927 sys 0m18.447s 00:32:46.927 12:44:19 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:46.927 12:44:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:46.927 ************************************ 00:32:46.927 END TEST nvmf_dif 00:32:46.927 ************************************ 00:32:46.927 12:44:19 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:46.927 12:44:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:46.927 12:44:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:46.927 12:44:19 -- common/autotest_common.sh@10 -- # set +x 00:32:46.927 ************************************ 00:32:46.927 START TEST nvmf_abort_qd_sizes 00:32:46.927 ************************************ 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:46.927 * Looking for test storage... 00:32:46.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.927 --rc genhtml_branch_coverage=1 00:32:46.927 --rc genhtml_function_coverage=1 00:32:46.927 --rc genhtml_legend=1 00:32:46.927 --rc geninfo_all_blocks=1 00:32:46.927 --rc geninfo_unexecuted_blocks=1 00:32:46.927 00:32:46.927 ' 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.927 --rc genhtml_branch_coverage=1 00:32:46.927 --rc genhtml_function_coverage=1 00:32:46.927 --rc genhtml_legend=1 00:32:46.927 --rc geninfo_all_blocks=1 00:32:46.927 --rc geninfo_unexecuted_blocks=1 00:32:46.927 00:32:46.927 ' 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.927 --rc genhtml_branch_coverage=1 00:32:46.927 --rc genhtml_function_coverage=1 00:32:46.927 --rc genhtml_legend=1 00:32:46.927 --rc geninfo_all_blocks=1 00:32:46.927 --rc geninfo_unexecuted_blocks=1 00:32:46.927 00:32:46.927 ' 00:32:46.927 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.927 --rc genhtml_branch_coverage=1 00:32:46.928 --rc genhtml_function_coverage=1 00:32:46.928 --rc genhtml_legend=1 00:32:46.928 --rc geninfo_all_blocks=1 00:32:46.928 --rc geninfo_unexecuted_blocks=1 00:32:46.928 00:32:46.928 ' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:46.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.928 12:44:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:49.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:49.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:49.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:49.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.462 12:44:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:49.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:32:49.462 00:32:49.462 --- 10.0.0.2 ping statistics --- 00:32:49.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.462 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:49.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:32:49.462 00:32:49.462 --- 10.0.0.1 ping statistics --- 00:32:49.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.462 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:49.462 12:44:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:50.434 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:50.434 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:50.434 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:50.434 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:50.434 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:50.693 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:50.693 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:50.693 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:50.693 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:51.630 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=797264 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 797264 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 797264 ']' 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
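The nvmf_tcp_init trace above condenses to a short standalone sketch. This is a reconstruction, not the harness code itself: it assumes the two ports of one NIC enumerate as cvl_0_0 and cvl_0_1 (as in this run) and keeps the same 10.0.0.0/24 addressing, with the target port isolated in its own network namespace so initiator and target can talk over real TCP on a single host.

    # Minimal sketch of the namespace topology set up above (assumes a
    # two-port NIC enumerated as cvl_0_0/cvl_0_1, as in this run).
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                    # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment tag is what the later
    # cleanup greps for when restoring the firewall.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator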
00:32:51.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:51.630 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:51.945 [2024-10-30 12:44:24.338277] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:32:51.945 [2024-10-30 12:44:24.338348] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.945 [2024-10-30 12:44:24.411239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:51.945 [2024-10-30 12:44:24.469882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.945 [2024-10-30 12:44:24.469933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.945 [2024-10-30 12:44:24.469953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.945 [2024-10-30 12:44:24.469970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.945 [2024-10-30 12:44:24.469985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.945 [2024-10-30 12:44:24.471455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.945 [2024-10-30 12:44:24.471479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.945 [2024-10-30 12:44:24.471534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.945 [2024-10-30 12:44:24.471538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.945 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:51.945 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:32:51.945 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.945 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.945 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:52.230 
12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:52.230 12:44:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:52.230 ************************************ 00:32:52.230 START TEST spdk_target_abort 00:32:52.230 ************************************ 00:32:52.230 12:44:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:32:52.230 12:44:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:52.230 12:44:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:52.230 12:44:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.230 12:44:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.511 spdk_targetn1 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.512 [2024-10-30 12:44:27.479084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.512 [2024-10-30 12:44:27.528417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:55.512 12:44:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:58.038 Initializing NVMe Controllers 00:32:58.038 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:58.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:58.038 Initialization complete. Launching workers. 00:32:58.038 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13048, failed: 0 00:32:58.038 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 11829 00:32:58.038 success 726, unsuccessful 493, failed 0 00:32:58.038 12:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:58.038 12:44:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:02.219 Initializing NVMe Controllers 00:33:02.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:02.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:02.219 Initialization complete. Launching workers. 00:33:02.219 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8601, failed: 0 00:33:02.219 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7349 00:33:02.219 success 325, unsuccessful 927, failed 0 00:33:02.219 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:02.220 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:04.748 Initializing NVMe Controllers 00:33:04.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:04.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:04.748 Initialization complete. Launching workers. 
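The spdk_target_abort runs above reduce to one provisioning sequence plus one abort invocation per queue depth. A condensed sketch, with every RPC and flag taken from the trace (the PCIe address 0000:88:00.0 and the repo path are specific to this host; rpc.py talks to the target's default /var/tmp/spdk.sock):

    # Provision the SPDK target over JSON-RPC, then drive it with the abort
    # example. Run from an SPDK checkout with nvmf_tgt already listening.
    RPC=scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:testnqn

    $RPC bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" spdk_targetn1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    for qd in 4 24 64; do
        # -q: queue depth (deeper queues leave more I/O in flight to abort),
        # -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/Os
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
    done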
00:33:04.748 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30986, failed: 0 00:33:04.748 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2742, failed to submit 28244 00:33:04.748 success 495, unsuccessful 2247, failed 0 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.748 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 797264 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 797264 ']' 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 797264 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 797264 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 797264' 00:33:06.118 killing process with pid 797264 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 797264 00:33:06.118 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 797264 00:33:06.376 00:33:06.376 real 0m14.227s 00:33:06.376 user 0m53.905s 00:33:06.376 sys 0m2.597s 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:06.376 ************************************ 00:33:06.376 END TEST spdk_target_abort 00:33:06.376 ************************************ 00:33:06.376 12:44:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:06.376 12:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:06.376 12:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:06.376 12:44:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:06.376 ************************************ 00:33:06.376 START TEST kernel_target_abort 00:33:06.376 
************************************ 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:06.376 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:06.377 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:06.377 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:06.377 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:06.377 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:06.377 12:44:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:07.752 Waiting for block devices as requested 00:33:07.752 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:07.752 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:07.752 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:08.011 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:08.011 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:08.011 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:08.011 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:08.270 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:08.270 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:08.270 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:08.270 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:08.528 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:08.528 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:08.528 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:08.528 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:08.786 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:08.786 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:09.045 No valid GPT data, bailing 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:09.045 12:44:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:09.045 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:09.045 00:33:09.045 Discovery Log Number of Records 2, Generation counter 2 00:33:09.045 =====Discovery Log Entry 0====== 00:33:09.045 trtype: tcp 00:33:09.045 adrfam: ipv4 00:33:09.045 subtype: current discovery subsystem 00:33:09.045 treq: not specified, sq flow control disable supported 00:33:09.045 portid: 1 00:33:09.045 trsvcid: 4420 00:33:09.045 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:09.045 traddr: 10.0.0.1 00:33:09.045 eflags: none 00:33:09.045 sectype: none 00:33:09.045 =====Discovery Log Entry 1====== 00:33:09.045 trtype: tcp 00:33:09.045 adrfam: ipv4 00:33:09.045 subtype: nvme subsystem 00:33:09.045 treq: not specified, sq flow control disable supported 00:33:09.045 portid: 1 00:33:09.045 trsvcid: 4420 00:33:09.045 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:09.045 traddr: 10.0.0.1 00:33:09.045 eflags: none 00:33:09.045 sectype: none 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:09.046 12:44:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:09.046 12:44:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:12.325 Initializing NVMe Controllers 00:33:12.325 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:12.325 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:12.325 Initialization complete. Launching workers. 00:33:12.325 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56556, failed: 0 00:33:12.325 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56556, failed to submit 0 00:33:12.325 success 0, unsuccessful 56556, failed 0 00:33:12.325 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:12.325 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:15.607 Initializing NVMe Controllers 00:33:15.607 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:15.607 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:15.607 Initialization complete. Launching workers. 
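The kernel-target counterpart configured above uses the nvmet configfs tree instead of RPCs. The xtrace does not show redirection targets, so the attribute paths below are filled in from the standard nvmet configfs layout; the echoed values (/dev/nvme0n1, 10.0.0.1, port 4420) are the ones from this run:

    # Sketch of configure_kernel_target: export a local NVMe namespace over
    # NVMe/TCP via the kernel target. Attribute names inferred from the
    # nvmet configfs layout; redirections are not visible in the trace.
    NQN=nqn.2016-06.io.spdk:testnqn
    SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
    PORT=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                  # nvmet_tcp is pulled in when the port enables
    mkdir "$SUBSYS" "$SUBSYS/namespaces/1" "$PORT"
    echo "SPDK-$NQN"  > "$SUBSYS/attr_model"           # model string (inferred)
    echo 1            > "$SUBSYS/attr_allow_any_host"
    echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
    echo 1            > "$SUBSYS/namespaces/1/enable"
    echo 10.0.0.1     > "$PORT/addr_traddr"
    echo tcp          > "$PORT/addr_trtype"
    echo 4420         > "$PORT/addr_trsvcid"
    echo ipv4         > "$PORT/addr_adrfam"
    ln -s "$SUBSYS" "$PORT/subsystems/"                # activates the listener

    nvme discover -t tcp -a 10.0.0.1 -s 4420           # should show both log entries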
00:33:15.607 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99889, failed: 0 00:33:15.607 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25174, failed to submit 74715 00:33:15.608 success 0, unsuccessful 25174, failed 0 00:33:15.608 12:44:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:15.608 12:44:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:18.886 Initializing NVMe Controllers 00:33:18.886 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:18.886 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:18.886 Initialization complete. Launching workers. 00:33:18.886 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96822, failed: 0 00:33:18.886 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24178, failed to submit 72644 00:33:18.886 success 0, unsuccessful 24178, failed 0 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:18.886 12:44:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:19.823 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:19.823 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:19.823 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:33:19.823 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:20.760 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:20.760 00:33:20.760 real 0m14.475s 00:33:20.760 user 0m6.769s 00:33:20.760 sys 0m3.224s 00:33:20.760 12:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:20.760 12:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:20.760 ************************************ 00:33:20.760 END TEST kernel_target_abort 00:33:20.760 ************************************ 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:20.760 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.761 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.761 rmmod nvme_tcp 00:33:20.761 rmmod nvme_fabrics 00:33:20.761 rmmod nvme_keyring 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 797264 ']' 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 797264 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 797264 ']' 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 797264 00:33:21.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (797264) - No such process 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 797264 is not found' 00:33:21.018 Process with pid 797264 is not found 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:21.018 12:44:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:21.955 Waiting for block devices as requested 00:33:22.214 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:22.214 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:22.474 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:22.474 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:22.474 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:22.474 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:22.734 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:22.734 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:22.734 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:22.734 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:22.993 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:22.993 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:22.993 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:22.993 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:23.252 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:23.252 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:23.252 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:23.252 12:44:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.784 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:25.784 00:33:25.784 real 0m38.571s 00:33:25.784 user 1m2.930s 00:33:25.784 sys 0m9.533s 00:33:25.784 12:44:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:25.784 12:44:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:25.784 ************************************ 00:33:25.784 END TEST nvmf_abort_qd_sizes 00:33:25.784 ************************************ 00:33:25.784 12:44:57 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:25.784 12:44:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:25.784 12:44:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:25.784 12:44:57 -- common/autotest_common.sh@10 -- # set +x 00:33:25.784 ************************************ 00:33:25.784 START TEST keyring_file 00:33:25.784 ************************************ 00:33:25.784 12:44:58 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:25.784 * Looking for test storage... 
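The teardown traced above is the mirror image of the setup: clean_kernel_target unlinks and rmdirs the configfs tree and unloads nvmet_tcp/nvmet, while nvmf_tcp_fini restores the firewall and removes the namespace. The iptables step works because every rule added by the harness carries the SPDK_NVMF comment tag; the namespace removal itself runs with xtrace disabled, so the ip netns del below is inferred:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns del cvl_0_0_ns_spdk                           # returns cvl_0_0 to the host
    ip -4 addr flush cvl_0_1                               # clear the initiator address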
00:33:25.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.785 --rc genhtml_branch_coverage=1 00:33:25.785 --rc genhtml_function_coverage=1 00:33:25.785 --rc genhtml_legend=1 00:33:25.785 --rc geninfo_all_blocks=1 00:33:25.785 --rc geninfo_unexecuted_blocks=1 00:33:25.785 00:33:25.785 ' 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.785 --rc genhtml_branch_coverage=1 00:33:25.785 --rc genhtml_function_coverage=1 00:33:25.785 --rc genhtml_legend=1 00:33:25.785 --rc geninfo_all_blocks=1 
00:33:25.785 --rc geninfo_unexecuted_blocks=1 00:33:25.785 00:33:25.785 ' 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.785 --rc genhtml_branch_coverage=1 00:33:25.785 --rc genhtml_function_coverage=1 00:33:25.785 --rc genhtml_legend=1 00:33:25.785 --rc geninfo_all_blocks=1 00:33:25.785 --rc geninfo_unexecuted_blocks=1 00:33:25.785 00:33:25.785 ' 00:33:25.785 12:44:58 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:25.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.785 --rc genhtml_branch_coverage=1 00:33:25.785 --rc genhtml_function_coverage=1 00:33:25.785 --rc genhtml_legend=1 00:33:25.785 --rc geninfo_all_blocks=1 00:33:25.785 --rc geninfo_unexecuted_blocks=1 00:33:25.785 00:33:25.785 ' 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.785 12:44:58 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.785 12:44:58 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.785 12:44:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.785 12:44:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.785 12:44:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:25.785 12:44:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:25.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.El8MHNe3uT 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:25.785 12:44:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.El8MHNe3uT 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.El8MHNe3uT 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.El8MHNe3uT 00:33:25.785 12:44:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:25.785 12:44:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ElgnHceCBg 00:33:25.786 12:44:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:25.786 12:44:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:25.786 12:44:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:25.786 12:44:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:25.786 12:44:58 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:25.786 12:44:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:25.786 12:44:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:25.786 12:44:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ElgnHceCBg 00:33:25.786 12:44:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ElgnHceCBg 00:33:25.786 12:44:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ElgnHceCBg 00:33:25.786 12:44:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=803033 00:33:25.786 12:44:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:25.786 12:44:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 803033 00:33:25.786 12:44:58 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 803033 ']' 00:33:25.786 12:44:58 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.786 12:44:58 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:25.786 12:44:58 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.786 12:44:58 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:25.786 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:25.786 [2024-10-30 12:44:58.337808] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:33:25.786 [2024-10-30 12:44:58.337914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803033 ] 00:33:25.786 [2024-10-30 12:44:58.403698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.786 [2024-10-30 12:44:58.462750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.044 12:44:58 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:26.044 12:44:58 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:33:26.044 12:44:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:26.044 12:44:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.044 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.302 [2024-10-30 12:44:58.732130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.302 null0 00:33:26.302 [2024-10-30 12:44:58.764195] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:26.302 [2024-10-30 12:44:58.764724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.302 12:44:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.302 [2024-10-30 12:44:58.788250] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:26.302 request: 00:33:26.302 { 00:33:26.302 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.302 "secure_channel": false, 00:33:26.302 "listen_address": { 00:33:26.302 "trtype": "tcp", 00:33:26.302 "traddr": "127.0.0.1", 00:33:26.302 "trsvcid": "4420" 00:33:26.302 }, 00:33:26.302 "method": "nvmf_subsystem_add_listener", 00:33:26.302 "req_id": 1 00:33:26.302 } 00:33:26.302 Got JSON-RPC error response 00:33:26.302 response: 00:33:26.302 { 00:33:26.302 "code": 
-32602, 00:33:26.302 "message": "Invalid parameters" 00:33:26.302 } 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:26.302 12:44:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:26.302 12:44:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=803042 00:33:26.302 12:44:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:26.303 12:44:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 803042 /var/tmp/bperf.sock 00:33:26.303 12:44:58 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 803042 ']' 00:33:26.303 12:44:58 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.303 12:44:58 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:26.303 12:44:58 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.303 12:44:58 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:26.303 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.303 [2024-10-30 12:44:58.836724] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:33:26.303 [2024-10-30 12:44:58.836787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803042 ] 00:33:26.303 [2024-10-30 12:44:58.899789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.303 [2024-10-30 12:44:58.955760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.561 12:44:59 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:26.561 12:44:59 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:33:26.561 12:44:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:26.561 12:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:26.819 12:44:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ElgnHceCBg 00:33:26.819 12:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ElgnHceCBg 00:33:27.077 12:44:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:27.077 12:44:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:27.077 12:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.077 12:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.077 12:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:27.334 
12:44:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.El8MHNe3uT == \/\t\m\p\/\t\m\p\.\E\l\8\M\H\N\e\3\u\T ]] 00:33:27.334 12:44:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:27.334 12:44:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:27.334 12:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.334 12:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.334 12:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:27.592 12:45:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ElgnHceCBg == \/\t\m\p\/\t\m\p\.\E\l\g\n\H\c\e\C\B\g ]] 00:33:27.592 12:45:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:27.592 12:45:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:27.592 12:45:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:27.592 12:45:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.592 12:45:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:27.592 12:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.849 12:45:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:27.849 12:45:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:27.849 12:45:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:27.849 12:45:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:27.849 12:45:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.849 12:45:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:27.849 12:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.106 12:45:00 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:28.106 12:45:00 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.106 12:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.363 [2024-10-30 12:45:01.002101] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:28.621 nvme0n1 00:33:28.621 12:45:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:28.621 12:45:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:28.621 12:45:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.621 12:45:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.621 12:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.621 12:45:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.879 12:45:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:28.879 12:45:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:28.879 12:45:01 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:33:28.879 12:45:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.879 12:45:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.879 12:45:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.879 12:45:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:29.136 12:45:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:29.136 12:45:01 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.136 Running I/O for 1 seconds... 00:33:30.510 10396.00 IOPS, 40.61 MiB/s 00:33:30.510 Latency(us) 00:33:30.510 [2024-10-30T11:45:03.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.510 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:30.510 nvme0n1 : 1.01 10447.35 40.81 0.00 0.00 12211.26 7281.78 26602.76 00:33:30.510 [2024-10-30T11:45:03.191Z] =================================================================================================================== 00:33:30.510 [2024-10-30T11:45:03.191Z] Total : 10447.35 40.81 0.00 0.00 12211.26 7281.78 26602.76 00:33:30.510 { 00:33:30.510 "results": [ 00:33:30.510 { 00:33:30.510 "job": "nvme0n1", 00:33:30.510 "core_mask": "0x2", 00:33:30.510 "workload": "randrw", 00:33:30.510 "percentage": 50, 00:33:30.510 "status": "finished", 00:33:30.510 "queue_depth": 128, 00:33:30.510 "io_size": 4096, 00:33:30.510 "runtime": 1.007337, 00:33:30.510 "iops": 10447.347809124454, 00:33:30.510 "mibps": 40.8099523793924, 00:33:30.510 "io_failed": 0, 00:33:30.510 "io_timeout": 0, 00:33:30.510 "avg_latency_us": 12211.259029519828, 00:33:30.510 "min_latency_us": 7281.777777777777, 00:33:30.510 "max_latency_us": 26602.76148148148 00:33:30.510 } 00:33:30.510 ], 00:33:30.510 "core_count": 1 00:33:30.510 } 00:33:30.510 12:45:02 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:30.510 12:45:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:30.510 12:45:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:30.510 12:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:30.510 12:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.510 12:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.510 12:45:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.510 12:45:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.768 12:45:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:30.768 12:45:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:30.768 12:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:30.768 12:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.768 12:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.768 12:45:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.768 12:45:03 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:31.025 12:45:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:31.025 12:45:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.025 12:45:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:31.026 12:45:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.026 12:45:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:31.026 12:45:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.026 12:45:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:31.026 12:45:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.026 12:45:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.026 12:45:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.284 [2024-10-30 12:45:03.924267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:31.284 [2024-10-30 12:45:03.924903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cf2f0 (107): Transport endpoint is not connected 00:33:31.284 [2024-10-30 12:45:03.925894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cf2f0 (9): Bad file descriptor 00:33:31.284 [2024-10-30 12:45:03.926893] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:31.284 [2024-10-30 12:45:03.926916] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:31.284 [2024-10-30 12:45:03.926930] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:31.284 [2024-10-30 12:45:03.926943] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
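Note: the attach failure above is the expected outcome, not a defect. The test deliberately connects with key1 against a target listener that was configured for key0, and wraps the RPC in the NOT helper so the suite only passes when the call fails. A minimal sketch of that expected-failure pattern follows; the exit-status bookkeeping visible in the trace (es, valid_exec_arg) is omitted, and the helper body is illustrative rather than autotest_common.sh's exact implementation:

NOT() {
        # Succeed only when the wrapped command fails.
        if "$@"; then
                echo "expected failure, but command succeeded: $*" >&2
                return 1
        fi
        return 0
}

# Illustrative use: attaching with a PSK the target does not accept must fail.
NOT ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1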
00:33:31.284 request: 00:33:31.284 { 00:33:31.284 "name": "nvme0", 00:33:31.284 "trtype": "tcp", 00:33:31.284 "traddr": "127.0.0.1", 00:33:31.284 "adrfam": "ipv4", 00:33:31.284 "trsvcid": "4420", 00:33:31.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:31.284 "prchk_reftag": false, 00:33:31.284 "prchk_guard": false, 00:33:31.284 "hdgst": false, 00:33:31.284 "ddgst": false, 00:33:31.284 "psk": "key1", 00:33:31.284 "allow_unrecognized_csi": false, 00:33:31.284 "method": "bdev_nvme_attach_controller", 00:33:31.284 "req_id": 1 00:33:31.284 } 00:33:31.284 Got JSON-RPC error response 00:33:31.284 response: 00:33:31.284 { 00:33:31.284 "code": -5, 00:33:31.284 "message": "Input/output error" 00:33:31.284 } 00:33:31.284 12:45:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:31.284 12:45:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:31.284 12:45:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:31.284 12:45:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:31.284 12:45:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:31.284 12:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:31.284 12:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.284 12:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.284 12:45:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:31.284 12:45:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.542 12:45:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:31.542 12:45:04 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:31.542 12:45:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:31.542 12:45:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.542 12:45:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.542 12:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.542 12:45:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:32.138 12:45:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:32.138 12:45:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:32.138 12:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:32.138 12:45:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:32.138 12:45:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:32.421 12:45:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:32.421 12:45:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:32.421 12:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.678 12:45:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:32.678 12:45:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.El8MHNe3uT 00:33:32.678 12:45:05 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.678 12:45:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:32.678 12:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:32.936 [2024-10-30 12:45:05.581219] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.El8MHNe3uT': 0100660 00:33:32.936 [2024-10-30 12:45:05.581277] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:32.936 request: 00:33:32.936 { 00:33:32.936 "name": "key0", 00:33:32.936 "path": "/tmp/tmp.El8MHNe3uT", 00:33:32.936 "method": "keyring_file_add_key", 00:33:32.936 "req_id": 1 00:33:32.936 } 00:33:32.936 Got JSON-RPC error response 00:33:32.936 response: 00:33:32.936 { 00:33:32.936 "code": -1, 00:33:32.936 "message": "Operation not permitted" 00:33:32.936 } 00:33:32.936 12:45:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:32.936 12:45:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:32.936 12:45:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:32.936 12:45:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:32.936 12:45:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.El8MHNe3uT 00:33:32.936 12:45:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:32.936 12:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.El8MHNe3uT 00:33:33.193 12:45:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.El8MHNe3uT 00:33:33.193 12:45:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:33.193 12:45:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.193 12:45:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.193 12:45:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.193 12:45:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.193 12:45:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.759 12:45:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:33.759 12:45:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.759 12:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.759 [2024-10-30 12:45:06.403459] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.El8MHNe3uT': No such file or directory 00:33:33.759 [2024-10-30 12:45:06.403505] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:33.759 [2024-10-30 12:45:06.403529] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:33.759 [2024-10-30 12:45:06.403542] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:33.759 [2024-10-30 12:45:06.403570] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:33.759 [2024-10-30 12:45:06.403581] bdev_nvme.c:6577:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:33.759 request: 00:33:33.759 { 00:33:33.759 "name": "nvme0", 00:33:33.759 "trtype": "tcp", 00:33:33.759 "traddr": "127.0.0.1", 00:33:33.759 "adrfam": "ipv4", 00:33:33.759 "trsvcid": "4420", 00:33:33.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.759 "prchk_reftag": false, 00:33:33.759 "prchk_guard": false, 00:33:33.759 "hdgst": false, 00:33:33.759 "ddgst": false, 00:33:33.759 "psk": "key0", 00:33:33.759 "allow_unrecognized_csi": false, 00:33:33.759 "method": "bdev_nvme_attach_controller", 00:33:33.759 "req_id": 1 00:33:33.759 } 00:33:33.759 Got JSON-RPC error response 00:33:33.759 response: 00:33:33.759 { 00:33:33.759 "code": -19, 00:33:33.759 "message": "No such device" 00:33:33.759 } 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:33.759 12:45:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:33.759 12:45:06 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:33.759 12:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:34.016 12:45:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nsopyW1BiX 00:33:34.016 12:45:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:34.016 12:45:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:34.016 12:45:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:34.016 12:45:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:34.016 12:45:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:34.016 12:45:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:34.016 12:45:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:34.274 12:45:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nsopyW1BiX 00:33:34.274 12:45:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nsopyW1BiX 00:33:34.274 12:45:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nsopyW1BiX 00:33:34.274 12:45:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nsopyW1BiX 00:33:34.274 12:45:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nsopyW1BiX 00:33:34.531 12:45:07 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.531 12:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.789 nvme0n1 00:33:34.789 12:45:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:34.789 12:45:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:34.789 12:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.789 12:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.789 12:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.789 12:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.048 12:45:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:35.048 12:45:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:35.048 12:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:35.306 12:45:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:35.306 12:45:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:35.306 12:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.306 12:45:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:33:35.306 12:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.564 12:45:08 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:35.564 12:45:08 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:35.564 12:45:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:35.564 12:45:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:35.564 12:45:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.564 12:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:35.564 12:45:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.131 12:45:08 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:36.131 12:45:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:36.131 12:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:36.131 12:45:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:36.131 12:45:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:36.131 12:45:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.696 12:45:09 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:36.696 12:45:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nsopyW1BiX 00:33:36.696 12:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nsopyW1BiX 00:33:36.696 12:45:09 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ElgnHceCBg 00:33:36.696 12:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ElgnHceCBg 00:33:36.954 12:45:09 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.954 12:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:37.518 nvme0n1 00:33:37.518 12:45:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:37.518 12:45:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:37.776 12:45:10 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:37.776 "subsystems": [ 00:33:37.776 { 00:33:37.776 "subsystem": "keyring", 00:33:37.776 "config": [ 00:33:37.776 { 00:33:37.776 "method": "keyring_file_add_key", 00:33:37.776 "params": { 00:33:37.776 "name": "key0", 00:33:37.776 "path": "/tmp/tmp.nsopyW1BiX" 00:33:37.776 } 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "method": "keyring_file_add_key", 00:33:37.776 "params": { 00:33:37.776 "name": "key1", 00:33:37.776 "path": "/tmp/tmp.ElgnHceCBg" 00:33:37.776 } 00:33:37.776 } 00:33:37.776 ] 
00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "subsystem": "iobuf", 00:33:37.776 "config": [ 00:33:37.776 { 00:33:37.776 "method": "iobuf_set_options", 00:33:37.776 "params": { 00:33:37.776 "small_pool_count": 8192, 00:33:37.776 "large_pool_count": 1024, 00:33:37.776 "small_bufsize": 8192, 00:33:37.776 "large_bufsize": 135168, 00:33:37.776 "enable_numa": false 00:33:37.776 } 00:33:37.776 } 00:33:37.776 ] 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "subsystem": "sock", 00:33:37.776 "config": [ 00:33:37.776 { 00:33:37.776 "method": "sock_set_default_impl", 00:33:37.776 "params": { 00:33:37.776 "impl_name": "posix" 00:33:37.776 } 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "method": "sock_impl_set_options", 00:33:37.776 "params": { 00:33:37.776 "impl_name": "ssl", 00:33:37.776 "recv_buf_size": 4096, 00:33:37.776 "send_buf_size": 4096, 00:33:37.776 "enable_recv_pipe": true, 00:33:37.776 "enable_quickack": false, 00:33:37.776 "enable_placement_id": 0, 00:33:37.776 "enable_zerocopy_send_server": true, 00:33:37.776 "enable_zerocopy_send_client": false, 00:33:37.776 "zerocopy_threshold": 0, 00:33:37.776 "tls_version": 0, 00:33:37.776 "enable_ktls": false 00:33:37.776 } 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "method": "sock_impl_set_options", 00:33:37.776 "params": { 00:33:37.776 "impl_name": "posix", 00:33:37.776 "recv_buf_size": 2097152, 00:33:37.776 "send_buf_size": 2097152, 00:33:37.776 "enable_recv_pipe": true, 00:33:37.776 "enable_quickack": false, 00:33:37.776 "enable_placement_id": 0, 00:33:37.776 "enable_zerocopy_send_server": true, 00:33:37.776 "enable_zerocopy_send_client": false, 00:33:37.776 "zerocopy_threshold": 0, 00:33:37.776 "tls_version": 0, 00:33:37.776 "enable_ktls": false 00:33:37.776 } 00:33:37.776 } 00:33:37.776 ] 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "subsystem": "vmd", 00:33:37.776 "config": [] 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "subsystem": "accel", 00:33:37.776 "config": [ 00:33:37.776 { 00:33:37.776 "method": "accel_set_options", 00:33:37.776 "params": { 00:33:37.776 "small_cache_size": 128, 00:33:37.776 "large_cache_size": 16, 00:33:37.776 "task_count": 2048, 00:33:37.776 "sequence_count": 2048, 00:33:37.776 "buf_count": 2048 00:33:37.776 } 00:33:37.776 } 00:33:37.776 ] 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "subsystem": "bdev", 00:33:37.776 "config": [ 00:33:37.776 { 00:33:37.776 "method": "bdev_set_options", 00:33:37.776 "params": { 00:33:37.776 "bdev_io_pool_size": 65535, 00:33:37.776 "bdev_io_cache_size": 256, 00:33:37.776 "bdev_auto_examine": true, 00:33:37.776 "iobuf_small_cache_size": 128, 00:33:37.776 "iobuf_large_cache_size": 16 00:33:37.776 } 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "method": "bdev_raid_set_options", 00:33:37.776 "params": { 00:33:37.776 "process_window_size_kb": 1024, 00:33:37.776 "process_max_bandwidth_mb_sec": 0 00:33:37.776 } 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "method": "bdev_iscsi_set_options", 00:33:37.776 "params": { 00:33:37.776 "timeout_sec": 30 00:33:37.776 } 00:33:37.776 }, 00:33:37.776 { 00:33:37.776 "method": "bdev_nvme_set_options", 00:33:37.776 "params": { 00:33:37.776 "action_on_timeout": "none", 00:33:37.776 "timeout_us": 0, 00:33:37.776 "timeout_admin_us": 0, 00:33:37.776 "keep_alive_timeout_ms": 10000, 00:33:37.776 "arbitration_burst": 0, 00:33:37.776 "low_priority_weight": 0, 00:33:37.776 "medium_priority_weight": 0, 00:33:37.776 "high_priority_weight": 0, 00:33:37.776 "nvme_adminq_poll_period_us": 10000, 00:33:37.776 "nvme_ioq_poll_period_us": 0, 00:33:37.776 "io_queue_requests": 512, 
00:33:37.776 "delay_cmd_submit": true, 00:33:37.776 "transport_retry_count": 4, 00:33:37.776 "bdev_retry_count": 3, 00:33:37.776 "transport_ack_timeout": 0, 00:33:37.776 "ctrlr_loss_timeout_sec": 0, 00:33:37.776 "reconnect_delay_sec": 0, 00:33:37.776 "fast_io_fail_timeout_sec": 0, 00:33:37.776 "disable_auto_failback": false, 00:33:37.776 "generate_uuids": false, 00:33:37.776 "transport_tos": 0, 00:33:37.776 "nvme_error_stat": false, 00:33:37.776 "rdma_srq_size": 0, 00:33:37.777 "io_path_stat": false, 00:33:37.777 "allow_accel_sequence": false, 00:33:37.777 "rdma_max_cq_size": 0, 00:33:37.777 "rdma_cm_event_timeout_ms": 0, 00:33:37.777 "dhchap_digests": [ 00:33:37.777 "sha256", 00:33:37.777 "sha384", 00:33:37.777 "sha512" 00:33:37.777 ], 00:33:37.777 "dhchap_dhgroups": [ 00:33:37.777 "null", 00:33:37.777 "ffdhe2048", 00:33:37.777 "ffdhe3072", 00:33:37.777 "ffdhe4096", 00:33:37.777 "ffdhe6144", 00:33:37.777 "ffdhe8192" 00:33:37.777 ] 00:33:37.777 } 00:33:37.777 }, 00:33:37.777 { 00:33:37.777 "method": "bdev_nvme_attach_controller", 00:33:37.777 "params": { 00:33:37.777 "name": "nvme0", 00:33:37.777 "trtype": "TCP", 00:33:37.777 "adrfam": "IPv4", 00:33:37.777 "traddr": "127.0.0.1", 00:33:37.777 "trsvcid": "4420", 00:33:37.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.777 "prchk_reftag": false, 00:33:37.777 "prchk_guard": false, 00:33:37.777 "ctrlr_loss_timeout_sec": 0, 00:33:37.777 "reconnect_delay_sec": 0, 00:33:37.777 "fast_io_fail_timeout_sec": 0, 00:33:37.777 "psk": "key0", 00:33:37.777 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.777 "hdgst": false, 00:33:37.777 "ddgst": false, 00:33:37.777 "multipath": "multipath" 00:33:37.777 } 00:33:37.777 }, 00:33:37.777 { 00:33:37.777 "method": "bdev_nvme_set_hotplug", 00:33:37.777 "params": { 00:33:37.777 "period_us": 100000, 00:33:37.777 "enable": false 00:33:37.777 } 00:33:37.777 }, 00:33:37.777 { 00:33:37.777 "method": "bdev_wait_for_examine" 00:33:37.777 } 00:33:37.777 ] 00:33:37.777 }, 00:33:37.777 { 00:33:37.777 "subsystem": "nbd", 00:33:37.777 "config": [] 00:33:37.777 } 00:33:37.777 ] 00:33:37.777 }' 00:33:37.777 12:45:10 keyring_file -- keyring/file.sh@115 -- # killprocess 803042 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 803042 ']' 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@956 -- # kill -0 803042 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@957 -- # uname 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 803042 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 803042' 00:33:37.777 killing process with pid 803042 00:33:37.777 12:45:10 keyring_file -- common/autotest_common.sh@971 -- # kill 803042 00:33:37.777 Received shutdown signal, test time was about 1.000000 seconds 00:33:37.777 00:33:37.777 Latency(us) 00:33:37.777 [2024-10-30T11:45:10.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.777 [2024-10-30T11:45:10.458Z] =================================================================================================================== 00:33:37.777 [2024-10-30T11:45:10.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.777 
12:45:10 keyring_file -- common/autotest_common.sh@976 -- # wait 803042 00:33:38.035 12:45:10 keyring_file -- keyring/file.sh@118 -- # bperfpid=805253 00:33:38.035 12:45:10 keyring_file -- keyring/file.sh@120 -- # waitforlisten 805253 /var/tmp/bperf.sock 00:33:38.035 12:45:10 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 805253 ']' 00:33:38.035 12:45:10 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.035 12:45:10 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:38.035 12:45:10 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:38.035 12:45:10 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.035 12:45:10 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:38.035 12:45:10 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:38.035 "subsystems": [ 00:33:38.035 { 00:33:38.035 "subsystem": "keyring", 00:33:38.035 "config": [ 00:33:38.035 { 00:33:38.035 "method": "keyring_file_add_key", 00:33:38.035 "params": { 00:33:38.035 "name": "key0", 00:33:38.035 "path": "/tmp/tmp.nsopyW1BiX" 00:33:38.035 } 00:33:38.035 }, 00:33:38.035 { 00:33:38.035 "method": "keyring_file_add_key", 00:33:38.035 "params": { 00:33:38.035 "name": "key1", 00:33:38.035 "path": "/tmp/tmp.ElgnHceCBg" 00:33:38.035 } 00:33:38.035 } 00:33:38.035 ] 00:33:38.035 }, 00:33:38.035 { 00:33:38.035 "subsystem": "iobuf", 00:33:38.035 "config": [ 00:33:38.035 { 00:33:38.035 "method": "iobuf_set_options", 00:33:38.035 "params": { 00:33:38.035 "small_pool_count": 8192, 00:33:38.035 "large_pool_count": 1024, 00:33:38.035 "small_bufsize": 8192, 00:33:38.035 "large_bufsize": 135168, 00:33:38.035 "enable_numa": false 00:33:38.035 } 00:33:38.035 } 00:33:38.035 ] 00:33:38.035 }, 00:33:38.035 { 00:33:38.035 "subsystem": "sock", 00:33:38.035 "config": [ 00:33:38.035 { 00:33:38.035 "method": "sock_set_default_impl", 00:33:38.035 "params": { 00:33:38.035 "impl_name": "posix" 00:33:38.035 } 00:33:38.035 }, 00:33:38.035 { 00:33:38.035 "method": "sock_impl_set_options", 00:33:38.035 "params": { 00:33:38.035 "impl_name": "ssl", 00:33:38.035 "recv_buf_size": 4096, 00:33:38.035 "send_buf_size": 4096, 00:33:38.035 "enable_recv_pipe": true, 00:33:38.035 "enable_quickack": false, 00:33:38.035 "enable_placement_id": 0, 00:33:38.035 "enable_zerocopy_send_server": true, 00:33:38.035 "enable_zerocopy_send_client": false, 00:33:38.035 "zerocopy_threshold": 0, 00:33:38.035 "tls_version": 0, 00:33:38.035 "enable_ktls": false 00:33:38.035 } 00:33:38.035 }, 00:33:38.036 { 00:33:38.036 "method": "sock_impl_set_options", 00:33:38.036 "params": { 00:33:38.036 "impl_name": "posix", 00:33:38.036 "recv_buf_size": 2097152, 00:33:38.036 "send_buf_size": 2097152, 00:33:38.036 "enable_recv_pipe": true, 00:33:38.036 "enable_quickack": false, 00:33:38.036 "enable_placement_id": 0, 00:33:38.036 "enable_zerocopy_send_server": true, 00:33:38.036 "enable_zerocopy_send_client": false, 00:33:38.036 "zerocopy_threshold": 0, 00:33:38.036 "tls_version": 0, 00:33:38.036 "enable_ktls": false 00:33:38.036 } 00:33:38.036 } 00:33:38.036 ] 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "subsystem": "vmd", 00:33:38.036 "config": [] 
00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "subsystem": "accel", 00:33:38.036 "config": [ 00:33:38.036 { 00:33:38.036 "method": "accel_set_options", 00:33:38.036 "params": { 00:33:38.036 "small_cache_size": 128, 00:33:38.036 "large_cache_size": 16, 00:33:38.036 "task_count": 2048, 00:33:38.036 "sequence_count": 2048, 00:33:38.036 "buf_count": 2048 00:33:38.036 } 00:33:38.036 } 00:33:38.036 ] 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "subsystem": "bdev", 00:33:38.036 "config": [ 00:33:38.036 { 00:33:38.036 "method": "bdev_set_options", 00:33:38.036 "params": { 00:33:38.036 "bdev_io_pool_size": 65535, 00:33:38.036 "bdev_io_cache_size": 256, 00:33:38.036 "bdev_auto_examine": true, 00:33:38.036 "iobuf_small_cache_size": 128, 00:33:38.036 "iobuf_large_cache_size": 16 00:33:38.036 } 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "method": "bdev_raid_set_options", 00:33:38.036 "params": { 00:33:38.036 "process_window_size_kb": 1024, 00:33:38.036 "process_max_bandwidth_mb_sec": 0 00:33:38.036 } 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "method": "bdev_iscsi_set_options", 00:33:38.036 "params": { 00:33:38.036 "timeout_sec": 30 00:33:38.036 } 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "method": "bdev_nvme_set_options", 00:33:38.036 "params": { 00:33:38.036 "action_on_timeout": "none", 00:33:38.036 "timeout_us": 0, 00:33:38.036 "timeout_admin_us": 0, 00:33:38.036 "keep_alive_timeout_ms": 10000, 00:33:38.036 "arbitration_burst": 0, 00:33:38.036 "low_priority_weight": 0, 00:33:38.036 "medium_priority_weight": 0, 00:33:38.036 "high_priority_weight": 0, 00:33:38.036 "nvme_adminq_poll_period_us": 10000, 00:33:38.036 "nvme_ioq_poll_period_us": 0, 00:33:38.036 "io_queue_requests": 512, 00:33:38.036 "delay_cmd_submit": true, 00:33:38.036 "transport_retry_count": 4, 00:33:38.036 "bdev_retry_count": 3, 00:33:38.036 "transport_ack_timeout": 0, 00:33:38.036 "ctrlr_loss_timeout_sec": 0, 00:33:38.036 "reconnect_delay_sec": 0, 00:33:38.036 "fast_io_fail_timeout_sec": 0, 00:33:38.036 "disable_auto_failback": false, 00:33:38.036 "generate_uuids": false, 00:33:38.036 "transport_tos": 0, 00:33:38.036 "nvme_error_stat": false, 00:33:38.036 "rdma_srq_size": 0, 00:33:38.036 "io_path_stat": false, 00:33:38.036 "allow_accel_sequence": false, 00:33:38.036 "rdma_max_cq_size": 0, 00:33:38.036 "rdma_cm_event_timeout_ms": 0, 00:33:38.036 "dhchap_digests": [ 00:33:38.036 "sha256", 00:33:38.036 "sha384", 00:33:38.036 "sha512" 00:33:38.036 ], 00:33:38.036 "dhchap_dhgroups": [ 00:33:38.036 "null", 00:33:38.036 "ffdhe2048", 00:33:38.036 "ffdhe3072", 00:33:38.036 "ffdhe4096", 00:33:38.036 "ffdhe6144", 00:33:38.036 "ffdhe8192" 00:33:38.036 ] 00:33:38.036 } 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "method": "bdev_nvme_attach_controller", 00:33:38.036 "params": { 00:33:38.036 "name": "nvme0", 00:33:38.036 "trtype": "TCP", 00:33:38.036 "adrfam": "IPv4", 00:33:38.036 "traddr": "127.0.0.1", 00:33:38.036 "trsvcid": "4420", 00:33:38.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:38.036 "prchk_reftag": false, 00:33:38.036 "prchk_guard": false, 00:33:38.036 "ctrlr_loss_timeout_sec": 0, 00:33:38.036 "reconnect_delay_sec": 0, 00:33:38.036 "fast_io_fail_timeout_sec": 0, 00:33:38.036 "psk": "key0", 00:33:38.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:38.036 "hdgst": false, 00:33:38.036 "ddgst": false, 00:33:38.036 "multipath": "multipath" 00:33:38.036 } 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "method": "bdev_nvme_set_hotplug", 00:33:38.036 "params": { 00:33:38.036 "period_us": 100000, 00:33:38.036 "enable": false 00:33:38.036 } 
00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "method": "bdev_wait_for_examine" 00:33:38.036 } 00:33:38.036 ] 00:33:38.036 }, 00:33:38.036 { 00:33:38.036 "subsystem": "nbd", 00:33:38.036 "config": [] 00:33:38.036 } 00:33:38.036 ] 00:33:38.036 }' 00:33:38.036 12:45:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:38.036 [2024-10-30 12:45:10.600523] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 00:33:38.036 [2024-10-30 12:45:10.600616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805253 ] 00:33:38.036 [2024-10-30 12:45:10.666100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.295 [2024-10-30 12:45:10.721921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.295 [2024-10-30 12:45:10.910183] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:38.552 12:45:11 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:38.552 12:45:11 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:33:38.552 12:45:11 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:38.552 12:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.552 12:45:11 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:38.810 12:45:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:38.810 12:45:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:38.810 12:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:38.810 12:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.810 12:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.810 12:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:38.810 12:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.069 12:45:11 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:39.069 12:45:11 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:39.069 12:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:39.069 12:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.069 12:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.069 12:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.069 12:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:39.327 12:45:11 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:39.327 12:45:11 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:39.327 12:45:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:39.327 12:45:11 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:39.604 12:45:12 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:39.604 12:45:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:39.604 12:45:12 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.nsopyW1BiX /tmp/tmp.ElgnHceCBg 00:33:39.604 12:45:12 keyring_file -- keyring/file.sh@20 -- # killprocess 805253 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 805253 ']' 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@956 -- # kill -0 805253 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@957 -- # uname 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 805253 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 805253' 00:33:39.604 killing process with pid 805253 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@971 -- # kill 805253 00:33:39.604 Received shutdown signal, test time was about 1.000000 seconds 00:33:39.604 00:33:39.604 Latency(us) 00:33:39.604 [2024-10-30T11:45:12.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.604 [2024-10-30T11:45:12.285Z] =================================================================================================================== 00:33:39.604 [2024-10-30T11:45:12.285Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:39.604 12:45:12 keyring_file -- common/autotest_common.sh@976 -- # wait 805253 00:33:39.862 12:45:12 keyring_file -- keyring/file.sh@21 -- # killprocess 803033 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 803033 ']' 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@956 -- # kill -0 803033 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@957 -- # uname 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 803033 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 803033' 00:33:39.862 killing process with pid 803033 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@971 -- # kill 803033 00:33:39.862 12:45:12 keyring_file -- common/autotest_common.sh@976 -- # wait 803033 00:33:40.120 00:33:40.120 real 0m14.749s 00:33:40.120 user 0m37.579s 00:33:40.120 sys 0m3.291s 00:33:40.120 12:45:12 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:40.120 12:45:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:40.120 ************************************ 00:33:40.120 END TEST keyring_file 00:33:40.120 ************************************ 00:33:40.120 12:45:12 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:33:40.120 12:45:12 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:40.120 12:45:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:40.120 12:45:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:40.120 12:45:12 -- 
common/autotest_common.sh@10 -- # set +x 00:33:40.379 ************************************ 00:33:40.379 START TEST keyring_linux 00:33:40.379 ************************************ 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:40.379 Joined session keyring: 56779529 00:33:40.379 * Looking for test storage... 00:33:40.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:40.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.379 --rc genhtml_branch_coverage=1 00:33:40.379 --rc genhtml_function_coverage=1 00:33:40.379 --rc genhtml_legend=1 00:33:40.379 --rc geninfo_all_blocks=1 00:33:40.379 --rc geninfo_unexecuted_blocks=1 00:33:40.379 00:33:40.379 ' 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:40.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.379 --rc genhtml_branch_coverage=1 00:33:40.379 --rc genhtml_function_coverage=1 00:33:40.379 --rc genhtml_legend=1 00:33:40.379 --rc geninfo_all_blocks=1 00:33:40.379 --rc geninfo_unexecuted_blocks=1 00:33:40.379 00:33:40.379 ' 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:40.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.379 --rc genhtml_branch_coverage=1 00:33:40.379 --rc genhtml_function_coverage=1 00:33:40.379 --rc genhtml_legend=1 00:33:40.379 --rc geninfo_all_blocks=1 00:33:40.379 --rc geninfo_unexecuted_blocks=1 00:33:40.379 00:33:40.379 ' 00:33:40.379 12:45:12 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:40.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.379 --rc genhtml_branch_coverage=1 00:33:40.379 --rc genhtml_function_coverage=1 00:33:40.379 --rc genhtml_legend=1 00:33:40.379 --rc geninfo_all_blocks=1 00:33:40.379 --rc geninfo_unexecuted_blocks=1 00:33:40.379 00:33:40.379 ' 00:33:40.379 12:45:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:40.379 12:45:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.379 12:45:12 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.379 12:45:12 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.379 12:45:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.379 12:45:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.379 12:45:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.380 12:45:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:40.380 12:45:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
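For context on the key preparation that follows (keyring/common.sh@15-23): prep_key turns a raw hex key into an NVMe TLS PSK interchange string and stashes it in a private file before anything is loaded into the kernel keyring. A minimal sketch of that derivation, assuming the interchange encoding is base64 over the key bytes plus a little-endian CRC32 (which is what the "python -" step in the trace below computes); prep_key_sketch is an illustrative name, not the actual SPDK helper:

prep_key_sketch() {
    # Usage: prep_key_sketch <key> <digest> <path>
    # e.g.:  prep_key_sketch 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
    local key=$1 digest=$2 path=$3
    # Assumed encoding: NVMeTLSkey-1:<2-digit digest>:base64(key || crc32_le(key)):
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")' \
        "$key" "$digest" > "$path"
    chmod 0600 "$path"   # keep the PSK file private, as the trace does
    echo "$path"
}

If the encoding assumption holds, this reproduces the NVMeTLSkey-1:00:... string echoed into /tmp/:spdk-test:key0 below.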
00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:40.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:40.380 12:45:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:40.380 12:45:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:40.380 12:45:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:40.380 12:45:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:40.380 12:45:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:40.380 12:45:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:40.380 12:45:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:40.380 12:45:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:40.380 /tmp/:spdk-test:key0 00:33:40.380 12:45:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:40.380 12:45:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:40.380 
12:45:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:40.380 12:45:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:40.380 12:45:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.380 12:45:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:40.380 12:45:13 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:40.380 12:45:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:40.380 12:45:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:40.637 12:45:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:40.637 12:45:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:40.637 /tmp/:spdk-test:key1 00:33:40.637 12:45:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=805613 00:33:40.637 12:45:13 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:40.637 12:45:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 805613 00:33:40.637 12:45:13 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 805613 ']' 00:33:40.637 12:45:13 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.637 12:45:13 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:40.637 12:45:13 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.637 12:45:13 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:40.637 12:45:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:40.637 [2024-10-30 12:45:13.116794] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
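A note on the startup sequencing above: linux.sh@50 forks spdk_tgt in the background (tgtpid 805613) and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. The readiness loop is roughly the sketch below, a simplification of autotest_common.sh's waitforlisten that uses the standard spdk_get_version RPC as the probe:

# Block until an SPDK app services RPCs, or bail if it died during startup.
# Paths are relative to the spdk checkout.
wait_for_rpc_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target crashed: stop waiting
        # One successful round-trip means the UNIX-domain socket is up
        # and the app is inside its reactor loop.
        scripts/rpc.py -s "$sock" spdk_get_version &> /dev/null && return 0
        sleep 0.1
    done
    return 1   # never came up
}

The bperf side (linux.sh@72, below) is gated the same way, against /var/tmp/bperf.sock.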
00:33:40.637 [2024-10-30 12:45:13.116903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805613 ] 00:33:40.637 [2024-10-30 12:45:13.180987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.637 [2024-10-30 12:45:13.239745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:33:40.894 12:45:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:40.894 [2024-10-30 12:45:13.501691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.894 null0 00:33:40.894 [2024-10-30 12:45:13.533757] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:40.894 [2024-10-30 12:45:13.534250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.894 12:45:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:40.894 590373433 00:33:40.894 12:45:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:40.894 791063419 00:33:40.894 12:45:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=805630 00:33:40.894 12:45:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:40.894 12:45:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 805630 /var/tmp/bperf.sock 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 805630 ']' 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:40.894 12:45:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:41.151 [2024-10-30 12:45:13.599808] Starting SPDK v25.01-pre git sha1 0a41b9e4e / DPDK 24.03.0 initialization... 
00:33:41.151 [2024-10-30 12:45:13.599870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805630 ] 00:33:41.151 [2024-10-30 12:45:13.662715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.151 [2024-10-30 12:45:13.718791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.408 12:45:13 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:41.408 12:45:13 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:33:41.408 12:45:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:41.408 12:45:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:41.665 12:45:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:41.665 12:45:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:41.922 12:45:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:41.922 12:45:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:42.178 [2024-10-30 12:45:14.698051] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:42.178 nvme0n1 00:33:42.178 12:45:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:42.178 12:45:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:42.178 12:45:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:42.178 12:45:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:42.178 12:45:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.178 12:45:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:42.435 12:45:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:42.435 12:45:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:42.435 12:45:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:42.435 12:45:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:42.435 12:45:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.435 12:45:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:42.435 12:45:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.691 12:45:15 keyring_linux -- keyring/linux.sh@25 -- # sn=590373433 00:33:42.691 12:45:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:42.691 12:45:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:42.691 12:45:15 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 590373433 == \5\9\0\3\7\3\4\3\3 ]] 00:33:42.691 12:45:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 590373433 00:33:42.691 12:45:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:42.691 12:45:15 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.948 Running I/O for 1 seconds... 00:33:43.881 11130.00 IOPS, 43.48 MiB/s 00:33:43.881 Latency(us) 00:33:43.881 [2024-10-30T11:45:16.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.881 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:43.881 nvme0n1 : 1.01 11136.66 43.50 0.00 0.00 11422.22 8883.77 19903.53 00:33:43.881 [2024-10-30T11:45:16.562Z] =================================================================================================================== 00:33:43.881 [2024-10-30T11:45:16.562Z] Total : 11136.66 43.50 0.00 0.00 11422.22 8883.77 19903.53 00:33:43.881 { 00:33:43.881 "results": [ 00:33:43.881 { 00:33:43.881 "job": "nvme0n1", 00:33:43.881 "core_mask": "0x2", 00:33:43.881 "workload": "randread", 00:33:43.881 "status": "finished", 00:33:43.881 "queue_depth": 128, 00:33:43.881 "io_size": 4096, 00:33:43.881 "runtime": 1.010985, 00:33:43.881 "iops": 11136.663748720308, 00:33:43.881 "mibps": 43.5025927684387, 00:33:43.881 "io_failed": 0, 00:33:43.881 "io_timeout": 0, 00:33:43.881 "avg_latency_us": 11422.21555048965, 00:33:43.881 "min_latency_us": 8883.76888888889, 00:33:43.881 "max_latency_us": 19903.525925925926 00:33:43.881 } 00:33:43.881 ], 00:33:43.881 "core_count": 1 00:33:43.881 } 00:33:43.881 12:45:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:43.881 12:45:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:44.139 12:45:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:44.139 12:45:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:44.139 12:45:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:44.139 12:45:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:44.139 12:45:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:44.139 12:45:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.397 12:45:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:44.397 12:45:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:44.397 12:45:17 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:44.397 12:45:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.397 12:45:17 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:44.397 12:45:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:44.656 [2024-10-30 12:45:17.298284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:44.656 [2024-10-30 12:45:17.298624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f20a0 (107): Transport endpoint is not connected 00:33:44.656 [2024-10-30 12:45:17.299618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f20a0 (9): Bad file descriptor 00:33:44.656 [2024-10-30 12:45:17.300617] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:44.656 [2024-10-30 12:45:17.300643] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:44.656 [2024-10-30 12:45:17.300656] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:44.656 [2024-10-30 12:45:17.300670] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
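The errors just logged are the test's intent: linux.sh@84 runs the attach under autotest_common.sh's NOT wrapper, so the case passes only when bdev_nvme_attach_controller with the mismatched :spdk-test:key1 PSK is rejected; that is what the es handling traced around this point records. A condensed sketch of the inverted assertion (illustrative, not the verbatim helper):

# Succeed only if the wrapped command fails.
not_sketch() {
    local es=0
    "$@" || es=$?
    ((es != 0))   # non-zero exit from the wrapped command is the passing case
}

not_sketch scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1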
00:33:44.656 request: 00:33:44.656 { 00:33:44.656 "name": "nvme0", 00:33:44.656 "trtype": "tcp", 00:33:44.656 "traddr": "127.0.0.1", 00:33:44.656 "adrfam": "ipv4", 00:33:44.656 "trsvcid": "4420", 00:33:44.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.656 "prchk_reftag": false, 00:33:44.656 "prchk_guard": false, 00:33:44.656 "hdgst": false, 00:33:44.656 "ddgst": false, 00:33:44.656 "psk": ":spdk-test:key1", 00:33:44.656 "allow_unrecognized_csi": false, 00:33:44.656 "method": "bdev_nvme_attach_controller", 00:33:44.656 "req_id": 1 00:33:44.656 } 00:33:44.656 Got JSON-RPC error response 00:33:44.656 response: 00:33:44.656 { 00:33:44.656 "code": -5, 00:33:44.656 "message": "Input/output error" 00:33:44.656 } 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@33 -- # sn=590373433 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 590373433 00:33:44.656 1 links removed 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@33 -- # sn=791063419 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 791063419 00:33:44.656 1 links removed 00:33:44.656 12:45:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 805630 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 805630 ']' 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 805630 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:33:44.656 12:45:17 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 805630 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 805630' 00:33:44.914 killing process with pid 805630 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@971 -- # kill 805630 00:33:44.914 Received shutdown signal, test time was about 1.000000 seconds 00:33:44.914 00:33:44.914 
Latency(us) 00:33:44.914 [2024-10-30T11:45:17.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.914 [2024-10-30T11:45:17.595Z] =================================================================================================================== 00:33:44.914 [2024-10-30T11:45:17.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@976 -- # wait 805630 00:33:44.914 12:45:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 805613 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 805613 ']' 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 805613 00:33:44.914 12:45:17 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:33:44.915 12:45:17 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:44.915 12:45:17 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 805613 00:33:45.172 12:45:17 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:45.172 12:45:17 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:45.172 12:45:17 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 805613' 00:33:45.172 killing process with pid 805613 00:33:45.172 12:45:17 keyring_linux -- common/autotest_common.sh@971 -- # kill 805613 00:33:45.172 12:45:17 keyring_linux -- common/autotest_common.sh@976 -- # wait 805613 00:33:45.432 00:33:45.432 real 0m5.228s 00:33:45.432 user 0m10.425s 00:33:45.432 sys 0m1.579s 00:33:45.432 12:45:18 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:45.432 12:45:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:45.432 ************************************ 00:33:45.432 END TEST keyring_linux 00:33:45.432 ************************************ 00:33:45.432 12:45:18 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:45.432 12:45:18 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:45.432 12:45:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:45.432 12:45:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:45.432 12:45:18 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:45.432 12:45:18 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:45.432 12:45:18 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:45.432 12:45:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:45.432 12:45:18 -- common/autotest_common.sh@10 -- # set +x 00:33:45.432 12:45:18 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:45.432 12:45:18 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:33:45.432 12:45:18 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:33:45.432 12:45:18 -- common/autotest_common.sh@10 -- # set +x 00:33:47.961 INFO: APP EXITING 00:33:47.961 INFO: 
killing all VMs 00:33:47.961 INFO: killing vhost app 00:33:47.961 INFO: EXIT DONE 00:33:48.527 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:33:48.527 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:48.527 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:48.527 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:48.784 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:48.784 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:48.784 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:48.784 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:48.784 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:48.784 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:48.785 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:48.785 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:48.785 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:48.785 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:48.785 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:48.785 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:48.785 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:50.158 Cleaning 00:33:50.158 Removing: /var/run/dpdk/spdk0/config 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:50.158 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:50.158 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:50.158 Removing: /var/run/dpdk/spdk1/config 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:50.158 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:50.158 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:50.158 Removing: /var/run/dpdk/spdk2/config 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:50.158 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:50.158 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:50.158 Removing: /var/run/dpdk/spdk3/config 00:33:50.158 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:50.158 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:50.158 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:50.158 Removing: /var/run/dpdk/spdk4/config 00:33:50.158 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:50.158 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:50.158 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:50.158 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:50.158 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:50.158 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:50.416 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:50.416 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:50.416 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:50.416 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:50.416 Removing: /dev/shm/bdev_svc_trace.1 00:33:50.416 Removing: /dev/shm/nvmf_trace.0 00:33:50.416 Removing: /dev/shm/spdk_tgt_trace.pid483949 00:33:50.416 Removing: /var/run/dpdk/spdk0 00:33:50.416 Removing: /var/run/dpdk/spdk1 00:33:50.416 Removing: /var/run/dpdk/spdk2 00:33:50.416 Removing: /var/run/dpdk/spdk3 00:33:50.416 Removing: /var/run/dpdk/spdk4 00:33:50.416 Removing: /var/run/dpdk/spdk_pid482372 00:33:50.416 Removing: /var/run/dpdk/spdk_pid483110 00:33:50.416 Removing: /var/run/dpdk/spdk_pid483949 00:33:50.416 Removing: /var/run/dpdk/spdk_pid484390 00:33:50.416 Removing: /var/run/dpdk/spdk_pid485082 00:33:50.416 Removing: /var/run/dpdk/spdk_pid485222 00:33:50.416 Removing: /var/run/dpdk/spdk_pid485935 00:33:50.416 Removing: /var/run/dpdk/spdk_pid485960 00:33:50.416 Removing: /var/run/dpdk/spdk_pid486240 00:33:50.416 Removing: /var/run/dpdk/spdk_pid487535 00:33:50.416 Removing: /var/run/dpdk/spdk_pid488450 00:33:50.417 Removing: /var/run/dpdk/spdk_pid488768 00:33:50.417 Removing: /var/run/dpdk/spdk_pid488966 00:33:50.417 Removing: /var/run/dpdk/spdk_pid489182 00:33:50.417 Removing: /var/run/dpdk/spdk_pid489378 00:33:50.417 Removing: /var/run/dpdk/spdk_pid489648 00:33:50.417 Removing: /var/run/dpdk/spdk_pid489805 00:33:50.417 Removing: /var/run/dpdk/spdk_pid490004 00:33:50.417 Removing: /var/run/dpdk/spdk_pid490195 00:33:50.417 Removing: /var/run/dpdk/spdk_pid492685 00:33:50.417 Removing: /var/run/dpdk/spdk_pid492855 00:33:50.417 Removing: /var/run/dpdk/spdk_pid493021 00:33:50.417 Removing: /var/run/dpdk/spdk_pid493139 00:33:50.417 Removing: /var/run/dpdk/spdk_pid493455 00:33:50.417 Removing: /var/run/dpdk/spdk_pid493573 00:33:50.417 Removing: /var/run/dpdk/spdk_pid493889 00:33:50.417 Removing: /var/run/dpdk/spdk_pid494013 00:33:50.417 Removing: /var/run/dpdk/spdk_pid494187 00:33:50.417 Removing: /var/run/dpdk/spdk_pid494313 00:33:50.417 Removing: /var/run/dpdk/spdk_pid494475 00:33:50.417 Removing: /var/run/dpdk/spdk_pid494491 00:33:50.417 Removing: /var/run/dpdk/spdk_pid494984 00:33:50.417 Removing: /var/run/dpdk/spdk_pid495136 00:33:50.417 Removing: /var/run/dpdk/spdk_pid495345 00:33:50.417 Removing: /var/run/dpdk/spdk_pid497478 00:33:50.417 
Removing: /var/run/dpdk/spdk_pid500097 00:33:50.417 Removing: /var/run/dpdk/spdk_pid507725 00:33:50.417 Removing: /var/run/dpdk/spdk_pid508133 00:33:50.417 Removing: /var/run/dpdk/spdk_pid510658 00:33:50.417 Removing: /var/run/dpdk/spdk_pid510930 00:33:50.417 Removing: /var/run/dpdk/spdk_pid513458 00:33:50.417 Removing: /var/run/dpdk/spdk_pid517302 00:33:50.417 Removing: /var/run/dpdk/spdk_pid519381 00:33:50.417 Removing: /var/run/dpdk/spdk_pid525801 00:33:50.417 Removing: /var/run/dpdk/spdk_pid531054 00:33:50.417 Removing: /var/run/dpdk/spdk_pid532376 00:33:50.417 Removing: /var/run/dpdk/spdk_pid533043 00:33:50.417 Removing: /var/run/dpdk/spdk_pid544046 00:33:50.417 Removing: /var/run/dpdk/spdk_pid546347 00:33:50.417 Removing: /var/run/dpdk/spdk_pid573866 00:33:50.417 Removing: /var/run/dpdk/spdk_pid577160 00:33:50.417 Removing: /var/run/dpdk/spdk_pid581508 00:33:50.417 Removing: /var/run/dpdk/spdk_pid585883 00:33:50.417 Removing: /var/run/dpdk/spdk_pid585885 00:33:50.417 Removing: /var/run/dpdk/spdk_pid586545 00:33:50.417 Removing: /var/run/dpdk/spdk_pid587085 00:33:50.417 Removing: /var/run/dpdk/spdk_pid587741 00:33:50.417 Removing: /var/run/dpdk/spdk_pid588139 00:33:50.417 Removing: /var/run/dpdk/spdk_pid588147 00:33:50.417 Removing: /var/run/dpdk/spdk_pid588400 00:33:50.417 Removing: /var/run/dpdk/spdk_pid588525 00:33:50.417 Removing: /var/run/dpdk/spdk_pid588537 00:33:50.417 Removing: /var/run/dpdk/spdk_pid589123 00:33:50.417 Removing: /var/run/dpdk/spdk_pid589740 00:33:50.417 Removing: /var/run/dpdk/spdk_pid590407 00:33:50.417 Removing: /var/run/dpdk/spdk_pid590817 00:33:50.417 Removing: /var/run/dpdk/spdk_pid590824 00:33:50.417 Removing: /var/run/dpdk/spdk_pid591085 00:33:50.417 Removing: /var/run/dpdk/spdk_pid591972 00:33:50.417 Removing: /var/run/dpdk/spdk_pid592708 00:33:50.417 Removing: /var/run/dpdk/spdk_pid598050 00:33:50.417 Removing: /var/run/dpdk/spdk_pid625967 00:33:50.417 Removing: /var/run/dpdk/spdk_pid628893 00:33:50.417 Removing: /var/run/dpdk/spdk_pid630567 00:33:50.417 Removing: /var/run/dpdk/spdk_pid632009 00:33:50.417 Removing: /var/run/dpdk/spdk_pid632164 00:33:50.417 Removing: /var/run/dpdk/spdk_pid632306 00:33:50.417 Removing: /var/run/dpdk/spdk_pid632449 00:33:50.417 Removing: /var/run/dpdk/spdk_pid632892 00:33:50.417 Removing: /var/run/dpdk/spdk_pid634216 00:33:50.417 Removing: /var/run/dpdk/spdk_pid635064 00:33:50.417 Removing: /var/run/dpdk/spdk_pid635493 00:33:50.417 Removing: /var/run/dpdk/spdk_pid637000 00:33:50.417 Removing: /var/run/dpdk/spdk_pid637415 00:33:50.417 Removing: /var/run/dpdk/spdk_pid637974 00:33:50.417 Removing: /var/run/dpdk/spdk_pid640361 00:33:50.417 Removing: /var/run/dpdk/spdk_pid643656 00:33:50.676 Removing: /var/run/dpdk/spdk_pid643657 00:33:50.676 Removing: /var/run/dpdk/spdk_pid643658 00:33:50.676 Removing: /var/run/dpdk/spdk_pid645879 00:33:50.676 Removing: /var/run/dpdk/spdk_pid650727 00:33:50.676 Removing: /var/run/dpdk/spdk_pid653506 00:33:50.676 Removing: /var/run/dpdk/spdk_pid657272 00:33:50.676 Removing: /var/run/dpdk/spdk_pid658214 00:33:50.676 Removing: /var/run/dpdk/spdk_pid659195 00:33:50.676 Removing: /var/run/dpdk/spdk_pid660284 00:33:50.676 Removing: /var/run/dpdk/spdk_pid663671 00:33:50.676 Removing: /var/run/dpdk/spdk_pid666253 00:33:50.676 Removing: /var/run/dpdk/spdk_pid668620 00:33:50.676 Removing: /var/run/dpdk/spdk_pid672856 00:33:50.676 Removing: /var/run/dpdk/spdk_pid672860 00:33:50.676 Removing: /var/run/dpdk/spdk_pid675641 00:33:50.676 Removing: /var/run/dpdk/spdk_pid675896 00:33:50.676 Removing: 
/var/run/dpdk/spdk_pid676032 00:33:50.676 Removing: /var/run/dpdk/spdk_pid676301 00:33:50.676 Removing: /var/run/dpdk/spdk_pid676426 00:33:50.676 Removing: /var/run/dpdk/spdk_pid679205 00:33:50.676 Removing: /var/run/dpdk/spdk_pid679532 00:33:50.676 Removing: /var/run/dpdk/spdk_pid682204 00:33:50.676 Removing: /var/run/dpdk/spdk_pid684177 00:33:50.676 Removing: /var/run/dpdk/spdk_pid687601 00:33:50.676 Removing: /var/run/dpdk/spdk_pid691068 00:33:50.676 Removing: /var/run/dpdk/spdk_pid697581 00:33:50.676 Removing: /var/run/dpdk/spdk_pid702669 00:33:50.676 Removing: /var/run/dpdk/spdk_pid702680 00:33:50.676 Removing: /var/run/dpdk/spdk_pid715049 00:33:50.676 Removing: /var/run/dpdk/spdk_pid715568 00:33:50.676 Removing: /var/run/dpdk/spdk_pid715983 00:33:50.676 Removing: /var/run/dpdk/spdk_pid716395 00:33:50.676 Removing: /var/run/dpdk/spdk_pid716971 00:33:50.676 Removing: /var/run/dpdk/spdk_pid717385 00:33:50.676 Removing: /var/run/dpdk/spdk_pid717910 00:33:50.676 Removing: /var/run/dpdk/spdk_pid718317 00:33:50.676 Removing: /var/run/dpdk/spdk_pid720823 00:33:50.676 Removing: /var/run/dpdk/spdk_pid720971 00:33:50.676 Removing: /var/run/dpdk/spdk_pid724772 00:33:50.676 Removing: /var/run/dpdk/spdk_pid724937 00:33:50.676 Removing: /var/run/dpdk/spdk_pid728257 00:33:50.676 Removing: /var/run/dpdk/spdk_pid730809 00:33:50.676 Removing: /var/run/dpdk/spdk_pid738346 00:33:50.676 Removing: /var/run/dpdk/spdk_pid738864 00:33:50.676 Removing: /var/run/dpdk/spdk_pid741255 00:33:50.676 Removing: /var/run/dpdk/spdk_pid741522 00:33:50.676 Removing: /var/run/dpdk/spdk_pid744032 00:33:50.676 Removing: /var/run/dpdk/spdk_pid747833 00:33:50.676 Removing: /var/run/dpdk/spdk_pid749900 00:33:50.676 Removing: /var/run/dpdk/spdk_pid756273 00:33:50.676 Removing: /var/run/dpdk/spdk_pid761471 00:33:50.676 Removing: /var/run/dpdk/spdk_pid762660 00:33:50.676 Removing: /var/run/dpdk/spdk_pid763313 00:33:50.676 Removing: /var/run/dpdk/spdk_pid774164 00:33:50.676 Removing: /var/run/dpdk/spdk_pid776472 00:33:50.676 Removing: /var/run/dpdk/spdk_pid778385 00:33:50.676 Removing: /var/run/dpdk/spdk_pid783430 00:33:50.676 Removing: /var/run/dpdk/spdk_pid783441 00:33:50.676 Removing: /var/run/dpdk/spdk_pid786342 00:33:50.676 Removing: /var/run/dpdk/spdk_pid787743 00:33:50.676 Removing: /var/run/dpdk/spdk_pid789189 00:33:50.676 Removing: /var/run/dpdk/spdk_pid790006 00:33:50.676 Removing: /var/run/dpdk/spdk_pid791407 00:33:50.676 Removing: /var/run/dpdk/spdk_pid792285 00:33:50.676 Removing: /var/run/dpdk/spdk_pid797631 00:33:50.676 Removing: /var/run/dpdk/spdk_pid797961 00:33:50.676 Removing: /var/run/dpdk/spdk_pid798353 00:33:50.676 Removing: /var/run/dpdk/spdk_pid799907 00:33:50.676 Removing: /var/run/dpdk/spdk_pid800307 00:33:50.676 Removing: /var/run/dpdk/spdk_pid800587 00:33:50.676 Removing: /var/run/dpdk/spdk_pid803033 00:33:50.676 Removing: /var/run/dpdk/spdk_pid803042 00:33:50.676 Removing: /var/run/dpdk/spdk_pid805253 00:33:50.676 Removing: /var/run/dpdk/spdk_pid805613 00:33:50.676 Removing: /var/run/dpdk/spdk_pid805630 00:33:50.676 Clean 00:33:50.935 12:45:23 -- common/autotest_common.sh@1451 -- # return 0 00:33:50.935 12:45:23 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:50.935 12:45:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.935 12:45:23 -- common/autotest_common.sh@10 -- # set +x 00:33:50.935 12:45:23 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:50.935 12:45:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:50.935 12:45:23 -- common/autotest_common.sh@10 -- 
# set +x 00:33:50.935 12:45:23 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:50.935 12:45:23 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:50.935 12:45:23 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:50.935 12:45:23 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:50.935 12:45:23 -- spdk/autotest.sh@394 -- # hostname 00:33:50.935 12:45:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:50.935 geninfo: WARNING: invalid characters removed from testname! 00:34:23.026 12:45:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:25.555 12:45:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:28.838 12:46:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:32.117 12:46:04 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:34.693 12:46:07 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:37.974 12:46:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:40.502 12:46:13 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:40.502 12:46:13 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:40.502 12:46:13 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:40.502 12:46:13 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:40.502 12:46:13 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:40.502 12:46:13 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:40.503 + [[ -n 411767 ]] 00:34:40.503 + sudo kill 411767 00:34:40.774 [Pipeline] } 00:34:40.789 [Pipeline] // stage 00:34:40.794 [Pipeline] } 00:34:40.809 [Pipeline] // timeout 00:34:40.814 [Pipeline] } 00:34:40.830 [Pipeline] // catchError 00:34:40.835 [Pipeline] } 00:34:40.850 [Pipeline] // wrap 00:34:40.856 [Pipeline] } 00:34:40.869 [Pipeline] // catchError 00:34:40.879 [Pipeline] stage 00:34:40.881 [Pipeline] { (Epilogue) 00:34:40.894 [Pipeline] catchError 00:34:40.896 [Pipeline] { 00:34:40.909 [Pipeline] echo 00:34:40.911 Cleanup processes 00:34:40.917 [Pipeline] sh 00:34:41.207 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.207 816323 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.223 [Pipeline] sh 00:34:41.511 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:41.511 ++ awk '{print $1}' 00:34:41.511 ++ grep -v 'sudo pgrep' 00:34:41.511 + sudo kill -9 00:34:41.511 + true 00:34:41.524 [Pipeline] sh 00:34:41.810 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:51.798 [Pipeline] sh 00:34:52.087 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:52.087 Artifacts sizes are good 00:34:52.104 [Pipeline] archiveArtifacts 00:34:52.112 Archiving artifacts 00:34:52.272 [Pipeline] sh 00:34:52.557 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:52.574 [Pipeline] cleanWs 00:34:52.584 [WS-CLEANUP] Deleting project workspace... 00:34:52.584 [WS-CLEANUP] Deferred wipeout is used... 00:34:52.591 [WS-CLEANUP] done 00:34:52.594 [Pipeline] } 00:34:52.614 [Pipeline] // catchError 00:34:52.626 [Pipeline] sh 00:34:52.923 + logger -p user.info -t JENKINS-CI 00:34:52.932 [Pipeline] } 00:34:52.945 [Pipeline] // stage 00:34:52.950 [Pipeline] } 00:34:52.963 [Pipeline] // node 00:34:52.968 [Pipeline] End of Pipeline 00:34:53.003 Finished: SUCCESS
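The coverage epilogue traced above (autotest.sh@394-404) reduces to: capture a per-test lcov snapshot, merge it with the pre-test baseline, then strip sources that should not count toward SPDK coverage before rendering. A condensed sketch of that pipeline, with the repeated branch/function-coverage flags folded into one variable and the genhtml-related --rc options omitted; paths and filter globs follow the commands in the log:

LCOV_FLAGS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK_DIR/../output

# Capture this run's data, tagged with the test-node hostname.
lcov $LCOV_FLAGS -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o $OUT/cov_test.info

# Merge the pre-test baseline with the per-test capture.
lcov $LCOV_FLAGS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

# Filter out DPDK, system headers, and example/tool sources.
lcov $LCOV_FLAGS -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
lcov $LCOV_FLAGS -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
lcov $LCOV_FLAGS -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
lcov $LCOV_FLAGS -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
lcov $LCOV_FLAGS -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info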